
ST. MARY'S ENGINEERING COLLEGE


B.TECH- CSE(AIML)
III - YEAR, I-SEM, ACADEMIC YEAR: 2023-24
COMPUTER NETWORKS - LECTURE NOTES

UNIT – I
INTRODUCTION

• Network Hardware
• Network Software
• OSI
• TCP/IP Reference Models
• Example Networks - Arpanet
• Internet
PHYSICAL LAYER
• Guided Transmission Media
• Twisted Pairs
• Coaxial Cable
• Fiber Optics
• Wireless Transmission
INTRODUCTION
A network is a set of devices (often referred to as nodes) connected by
communication links. A node can be a computer, printer, or any other device capable
of sending and/or receiving data generated by other nodes on the network.
The term "computer network" means a collection of autonomous computers interconnected
by a single technology. Two computers are said to be interconnected if they are able
to exchange information.
The connection need not be via a copper wire; fiber optics, microwaves, infrared,
and communication satellites can also be used.
Networks come in many sizes, shapes and forms, as we will see later. They
are usually connected together to make larger networks with the Internet being the
most well-known example of a network of networks.
There is considerable confusion in the literature between a computer network and a
distributed system.
The key distinction is that in a distributed system, a collection of independent
computers appears to its users as a single coherent system. Usually, it has a single
model or paradigm that it presents to the users. Often a layer of software on top of
the operating system, called middleware, is responsible for implementing this
model. A well-known example of a distributed system is the World Wide Web.
It runs on top of the Internet and presents a model in which everything looks like a
document (Web page).
USES OF COMPUTER NETWORKS
1. BUSINESS APPLICATIONS
To distribute information throughout the company (resource sharing): sharing
physical resources such as printers and tape backup systems, and, more importantly,
sharing information, usually through the client-server model.
The client-server model is widely used and forms the basis of much network usage.
A second use of the network is as a communication medium among employees, for example
email (electronic mail), which employees generally use for a great deal of daily communication.
Telephone calls between employees may be carried by the computer network
instead of by the phone company. This technology is called IP telephony or
Voice over IP (VoIP) when Internet technology is used.
Desktop sharing lets remote workers see and interact with a graphical computer screen.
A further use is doing business electronically, especially with customers and suppliers.
This new model is called e-commerce (electronic commerce) and it has grown
rapidly in recent years.
2. HOME APPLICATIONS
peer-to-peer communication
person-to-person communication
electronic commerce
entertainment (e.g., game playing)
3.MOBILE USERS
Text messaging or texting
Smart phones
GPS (Global Positioning System)
m-commerce
NFC (Near Field Communication)
4.SOCIAL ISSUES
With the good comes the bad, as this new-found freedom brings with it many
unsolved social, political, and ethical issues. Social networks, message boards,
content sharing sites.

CLASSIFICATION OF COMPUTER NETWORKS


1. Based on Transmission Mode
2. Based on Type of Connection
3. Based on Topology
4. Types of Network based on size
1) BASED ON TRANSMISSION MODE

Communication between two devices can be simplex, half-duplex, or full-duplex, as shown in the figure.

Simplex
In simplex mode, the communication is unidirectional, as on a one-way street.
Only one of the two devices on a link can transmit; the other can only receive
(Figure a). Keyboards and traditional monitors are examples of simplex devices.

Half-Duplex
In half-duplex mode, each station can both transmit and receive, but not at the same
time. When one device is sending, the other can only receive, and vice versa (Figure
b). Walkie-talkies and CB (citizens band) radios are both half-duplex systems.
Full-Duplex
In full-duplex, both stations can transmit and receive simultaneously (Figure c). One
common example of full-duplex communication is the telephone network. When
two people are communicating by a telephone line, both can talk and listen at
the same time. The full-duplex mode is used when communication in both
directions is required all the time.
2.BASED ON TYPE OF CONNECTION

There are two possible types of connections: point-to-point and multipoint.


Point-to-Point A point-to-point connection provides a dedicated link between
two devices. The entire capacity of the link is reserved for transmission between
those two devices. Most point-to-point connections use an actual length of wire or
cable to connect the two ends, but other options, such as microwave or satellite
links, are also possible
When you change television channels by infrared remote control, you are
establishing a point-to-point connection between the remote control and the
television's control system.
Multipoint A multipoint (also called multi-drop) connection is one in which more
than two specific devices share a single link
In a multipoint environment, the capacity of the channel is shared, either spatially or
temporally. If several devices can use the link simultaneously, it is a spatially shared
connection. If users must take turns, it is a timeshared connection.
3.BASED ON TOPOLOGY

I) Physical Topology
The term physical topology refers to the way in which a network is laid out
physically.
Two or more devices connect to a link; two or more links form a topology. The
topology of a network is the geometric representation of the relationship of all the
links and linking devices (usually called nodes) to one another.
There are four basic topologies possible: mesh, star, bus, and ring.

MESH TOPOLOGY

A mesh topology is the one where every node is connected to every other node in
the network. A mesh topology can be a full mesh topology or a partially
connected mesh topology.
The number of connections in this network can be calculated using the
following formula (n is the number of computers in the network): n(n-1)/2
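As a quick numeric check of this formula, here is a small Python sketch (illustrative only; the function name is ours, not part of the notes):

    # Number of point-to-point links in a full mesh of n nodes: n(n-1)/2
    def full_mesh_links(n):
        return n * (n - 1) // 2

    print(full_mesh_links(5))   # 10 links are needed to fully connect 5 computers
    print(full_mesh_links(10))  # 45 links for 10 computers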
In a partially connected mesh topology, at least two of the computers in the network
have connections to multiple other computers in that network. It is an inexpensive
way to implement redundancy in a network. In the event that one of the primary
computers or connections in the network fails, the rest of the network continues to
operate normally.
Advantages of a mesh topology
Can handle high amounts of traffic, because multiple devices can transmit data
simultaneously.
A failure of one device does not cause a break in the network or transmission of
data.
Adding additional devices does not disrupt data transmission between other
devices.
Disadvantages of a mesh topology
The cost to implement is higher than other network topologies, making it a less
desirable option.
Building and maintaining the topology is difficult and time consuming.
STAR TOPOLOGY
A star network, or star topology, is one of the most common network setups. In this
configuration, every node connects to a central network device, like a hub,
switch, or computer. The central network device acts as a server and the peripheral
devices act as clients. Depending on the type of network card used in each
computer of the star topology, a coaxial cable or an RJ-45 network cable is used to
connect computers together.

Advantages of star topology


Centralized management of the network, through the use of the central
computer, hub, or switch.
Easy to add another computer to the network.
If one computer on the network fails, the rest of the network continues to
function normally.
The star topology is used in local-area networks (LANs). High-speed LANs
often use a star topology with a central hub.
Disadvantages of star topology
Can have a higher cost to implement, especially when using a switch or
router as the central network device.
The central network device determines the performance and number of nodes the
network can handle.
If the central computer, hub, or switch fails, the entire network goes down and all
computers are disconnected from the network.
BUS TOPOLOGY

Also known as a line topology, a bus topology is a network setup in which each computer
and network device is connected to a single cable.
Advantages of bus topology
It works well when you have a small network.
It's the easiest network topology for connecting computers or peripherals in a
linear fashion.
It requires less cable length than a star topology.
Disadvantages of bus topology
It can be difficult to identify the problems if the whole network goes down.
It can be hard to troubleshoot individual device issues.
Bus topology is not great for large networks.
Terminators are required for both ends of the main cable.
Additional devices slow the network down.
If a main cable is damaged, the network fails or splits into two.

RING TOPOLOGY

A ring topology is a network configuration in which device connections create a
circular data path. In a ring network, packets of data travel from one device to the
next until they reach their destination. Most ring topologies allow packets to travel
only in one direction, called a unidirectional ring network. Others permit data to
move in either direction, called bidirectional.
The major disadvantage of a ring topology is that if any individual connection in the
ring is broken, the entire network is affected.
Ring topologies may be used in either local area networks (LANs) or wide area
networks (WANs).
Advantages of ring topology
All data flows in one direction, reducing the chance of packet collisions.
A network server is not needed to control network connectivity between each
workstation.
Data can transfer between workstations at high speeds.
Additional workstations can be added without impacting performance of the
network.

Disadvantages of ring topology


All data being transferred over the network must pass through each
workstation on the network, which can make it slower than a star topology.
The entire network will be impacted if one workstation shuts down.
The hardware needed to connect each workstation to the network is more
expensive than Ethernet cards and hubs/switches.
II) HYBRID TOPOLOGY
A network can be hybrid. For example, we can have a main star topology with
each branch connecting several stations in a bus topology as shown in Figure

4.TYPES OF NETWORK BASED ON SIZE


The types of network are classified based upon the size, the area it covers and its
physical architecture. The three primary network categories are LAN, WAN and
MAN. Each network differs in their characteristics such as distance, transmission
speed, cables and cost.
Basic types
LAN (Local Area Network)
Group of interconnected computers within a small area. (room,
building, campus)
Two or more PCs can form a LAN to share files, folders, printers, applications
and other devices.
Coaxial or CAT 5 cables are normally used for connections. Due to short distances,
errors and noise are minimal.
Data transfer rate is 10 to 100 Mbps. Example: a computer lab in a school.
MAN (Metropolitan Area Network)
Designed to extend over a large area.
Connects a number of LANs to form a larger network, so that resources can be shared.
Networks can extend over 5 to 50 km. Owned by an organization or individual.
Data transfer rate is low compared to LAN.
Example: Organization with different branches located in the city.
WAN (Wide Area Network)
These are country-wide and worldwide networks. They contain multiple LANs and MANs.
Distinguished in terms of geographical range. They use satellites and microwave relays.
Data transfer rate depends upon the ISP and varies with location. The best
example is the Internet.
Other types
WLAN (Wireless LAN)
A LAN that uses high frequency radio waves for communication. Provides short
range connectivity with high speed data transmission.
PAN (Personal Area Network)
A network organized by an individual user for personal use.
SAN (Storage Area Network) Connects servers to data storage devices via
fiber-optic cables.

NETWORK HARDWARE
Network hardware is a set of physical or network devices that are essential for interaction and
communication between hardware units operational on a computer network. These are
dedicated hardware components that connect to each other and enable a network to function
effectively and efficiently.
Network devices, also known as networking hardware, are physical devices that allow
hardware on a computer network to communicate and interact with one another. For example,
Repeater, Hub, Bridge, Switch, Routers, Gateway, and NIC, etc.
Network hardware plays a key role as industries grow as it supports scalability. It integrates
any number of components depending on the enterprise’s needs. Network hardware helps
establish an effective mode of communication, thereby improving the business standards. It
also promotes multiprocessing and enables sharing of resources, information, and software
with ease.
Network equipment is part of advancements of the Ethernet network protocol and utilizes a
twisted pair or fiber cable as a connection medium. Routers, hubs, switches, and bridges are
some examples of network hardware.
MODEMS: A modem enables a computer to connect to the internet via a telephone line. The
modem at one end converts the computer’s digital signals into analog signals and sends them
through a telephone line. At the other end, it converts the analog signals to digital signals that
are understandable for another computer.
ROUTERS: A router connects two or more networks. One common use of the router is to
connect a home or office network (LAN) to the internet (WAN). It generally has a plugged-in
internet cable along with cables that connect computers on the LAN. Alternatively, a LAN
connection can also be wireless (Wi-Fi-enabled), making the network device wireless. These
are also referred to as wireless access points (WAPs).
HUBS, BRIDGES, AND SWITCHES: Hubs, bridges, and switches are connecting units
that allow multiple devices to connect to the router and enable data transfer to all devices on a
network.
HUBS: A hub broadcasts data to all devices on a network. As a result, it consumes a lot of
bandwidth as many computers might not need to receive the broadcasted data. The hub could
be useful in linking a few gaming consoles in a local multiplayer game via a wired or
wireless LAN.
BRIDGES: A bridge connects two separate LAN networks. It scans for the receiving device
before sending a message. This implies that it avoids unnecessary data transfers if the
receiving device is not there. Moreover, it also checks to see whether the receiving device has
already received the message. These practices improve the overall performance of the
network.
SWITCHES: A switch is more powerful than a hub or a bridge but performs a similar role.
It stores the MAC addresses of network devices and transfers data packets only to those
devices that have requested them. Thus, when the demand is high, a switch becomes more
efficient as it reduces the amount of latency.
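As an illustration of this idea (not an actual switch implementation), the Python sketch below keeps a simple MAC-address-to-port table; the addresses and port numbers are made up:

    # Hypothetical sketch of a learning switch's MAC address table.
    mac_table = {}  # maps MAC address -> port on which that address was last seen

    def handle_frame(src_mac, dst_mac, in_port):
        mac_table[src_mac] = in_port          # learn the sender's location
        if dst_mac in mac_table:
            return "forward on port %d" % mac_table[dst_mac]   # known destination
        return "flood on all ports except %d" % in_port        # unknown destination

    print(handle_frame("00:AA:00:00:00:01", "00:AA:00:00:00:02", 1))  # flooded
    print(handle_frame("00:AA:00:00:00:02", "00:AA:00:00:00:01", 2))  # forwarded on port 1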
NETWORK INTERFACE CARDS: A network interface card (NIC) is a hardware unit
installed on a computer, which allows it to connect to a network. It is typically in the form of
a circuit board or chip. In most modern machines, NICs are built into the motherboards, while
in some computers, an extra expansion card in the form of a small circuit board is added
externally.
NETWORK CABLES: Cables connect different devices on a network. Today, most
networks still prefer cables over a wireless connection as they are more secure, i.e., less prone to
attacks, and at the same time carry larger volumes of data per second.
FIREWALL: A firewall is a hardware or software device between a computer and the rest of
the network open to attackers or hackers. Thus, a LAN can be protected from hackers by
placing a firewall between the LAN and the internet connection. A firewall allows authorized
connections and data-like emails or web pages to pass through but blocks unauthorized
connections made to a computer or LAN.

Bluetooth PAN configuration.
Wireless and wired LANs. (a) 802.11. (b) Switched Ethernet.
A metropolitan area network based on cable TV.
WAN that connects three branch offices in Australia.

NETWORK SOFTWARE
Network software is an umbrella term used to describe a wide range of software that
streamlines the operations, design, monitoring, and implementation of computer networks.
Network software is a fundamental element for any networking system. It helps
administrators and security personnel reduce network complexities, and manage, monitor,
and better control network traffic.
Network software plays a crucial role in managing a network infrastructure and simplifying
IT operations by facilitating communication, security, content, and data sharing.
The first computer networks were designed with the hardware as the main concern and the
software as an afterthought. This strategy no longer works. Network software is now highly
structured.
• Protocol Hierarchies
• Design Issues for the Layers
• Connection-Oriented Versus Connectionless Service
• Service Primitives
• The Relationship of Services to Protocols

FUNCTIONS OF NETWORK SOFTWARE


User management allows administrators to add or remove users from the network. This is
particularly useful when hiring or relieving employees.
File management lets administrators decide the location of data storage and control user
access to that data.
Access enables users to enjoy uninterrupted access to network resources.
Network security systems assist administrators in looking after security and preventing data
breaches.
KEY COMPONENTS OF NETWORK SOFTWARE
Network software is an advanced, robust, and secure alternative to traditional networking,
making the network easier to administer in terms of management, modifications,
configuration, supply resources, and troubleshooting. The use of network software makes it
possible to administer from one centralized user interface while completely eliminating the
need to acquire additional hardware.
1. APPLICATION LAYER
The first component is the application layer or the application plane, which refers to the
applications and services running on the network. It is a program that conveys network
information, the status of the network, and the network requirements for particular resource
availability and application.
This is done through the control layer via application programming interfaces (APIs). The
application layer also consists of the application logic and one or more API drivers.
2. CONTROL LAYER
The control layer lies at the center of the architecture and is one of the most important
components of the three layers. You could call it the brain of the whole system.
Also called the controller or the control plane, this layer also includes the network control
software and the network operating system within it. It is the entity in charge of receiving
requirements from the applications and translating the same to the network components.
The control of the infrastructure layer or the data plane devices is also done via the controller.
In simple terms, the control layer is the intermediary that facilitates communication between
the top and bottom layers through API interfaces.
3. INFRASTRUCTURE LAYER
The infrastructure layer, also called the data plane, consists of the actual network devices
(both physical and virtual) that reside in this layer. They are primarily responsible for moving
or forwarding the data packets after receiving due instructions from the control layer.
In simple terms, the data plane of the network architecture physically handles
user traffic based on the commands received from the controller.

There are numerous types of network software available, with most of them being
categorized under the communications and security arena. The varieties of network software
differ based on their key features and costs.
The main role of network software is to eliminate the dependence on hardware by
streamlining communications across multiple devices, locations, and systems.
REFERENCE MODELS
We will discuss two important network architectures: the OSI reference model and the
TCP/IP reference model. Although the protocols associated with the OSI model are not used
any more, the model itself is actually quite general and still valid, and the features discussed
at each layer are still very important. The TCP/IP model has the opposite properties: the
model itself is not of much use, but the protocols are widely used.
THE OSI REFERENCE MODEL

This model is based on a proposal developed by the International Standards Organization


(ISO) as a first step toward international standardization of the protocols used in the various
layers (Day and Zimmermann, 1983). It was revised in 1995 (Day, 1995).
The model is called the ISO OSI (Open Systems Interconnection) Reference Model
because it deals with connecting open systems, that is, systems that are open for
communication with other systems. We will just call it the OSI model for short.
The OSI model has seven layers. The principles that were applied to arrive at the seven
layers can be briefly summarized as follows:
A layer should be created where a different abstraction is needed.
Each layer should perform a well-defined function.
The function of each layer should be chosen with an eye toward defining internationally
standardized protocols.
PHYSICAL LAYER
Deals with all aspects of physically moving data from one computer to the next.
Converts data from the upper layers into 1s and 0s for transmission over media
Defines how data is encoded onto the media to transmit the data
Defined on this layer: Cable standards, wireless standards, and fiber optic standards.
Copper wiring, fiber optic cable, radio frequencies, anything that can be used to
transmit data is defined on the Physical layer of the OSI Model
Device example: Hub
Used to transmit data
DATA LINK LAYER
Is responsible for moving frames from node to node or computer to computer
Can move frames from one adjacent computer to another, cannot move frames
across routers
Encapsulation = frame
Requires MAC address or physical address
Protocols defined include Ethernet Protocol and Point-to-Point Protocol (PPP)
Device example: Switch
Two sublayers: Logical Link Control (LLC) and the Media Access Control (MAC)
Logical Link Control (LLC)
–Data Link layer addressing, flow control, address notification, error control
Media Access Control (MAC)
–Determines which computer has access to the network media at any given
time
–Determines where one frame ends and the next one starts, called frame
synchronization
NETWORK LAYER
Responsible for moving packets (data) from one end of the network to the
other, called end-to-end communications
Requires logical addresses such as IP addresses
Device example: Router
–Routing is the ability of various network devices and their related software to
move data packets from source to destination
TRANSPORT LAYER
Takes data from higher levels of OSI Model and breaks it into segments that can be
sent to lower-level layers for data transmission
Conversely, reassembles data segments into data that higher-level protocols and
applications can use
Also puts segments in correct order (called sequencing) so they can be
reassembled in correct order at destination
Concerned with the reliability of the transport of sent data
May use a connection-oriented protocol such as TCP to ensure destination received
segments
May use a connectionless protocol such as UDP to send segments without
assurance of delivery
Uses port addressing
SESSION LAYER
Responsible for managing the dialog between networked devices
Establishes, manages, and terminates connections
Provides duplex, half-duplex, or simplex communications between devices.
Provides procedures for establishing check points, adjournment, termination, and
restart or recovery procedures.
PRESENTATION LAYER
Concerned with how data is presented to the network
Handles three primary tasks: Translation, Compression, Encryption

APPLICATION LAYER
Contains all services or protocols needed by application software or operating
system to communicate on the network
Examples
–Firefox web browser uses HTTP (Hyper-Text Transport Protocol)
–E-mail program may use POP3 (Post Office Protocol version 3) to read e-mails
and SMTP (Simple Mail Transport Protocol) to send e-mails

The interaction between layers in the OSI model


TCP/IP MODEL
TCP/IP stands for Transmission Control Protocol/Internet Protocol.
A protocol suite is a large number of related protocols that work together to
allow networked computers to communicate.
Whereas the OSI model is a general reference model, the TCP/IP model is the reference model
used in the grandparent of all wide area computer networks, the ARPANET, and in its
successor, the worldwide Internet.
Relationship of layers and addresses in TCP/IP

APPLICATION LAYER
Application layer protocols define the rules when implementing specific network
applications
Rely on the underlying layers to provide accurate and efficient data delivery
Typical protocols:
FTP – File Transfer Protocol
For file transfer
Telnet – Remote terminal protocol
For remote login on any other computer on the network
SMTP – Simple Mail Transfer Protocol
For mail transfer
HTTP – Hypertext Transfer Protocol
For Web browsing
Encompasses the same functions as these OSI model layers:
Application, Presentation and Session
TRANSPORT LAYER
TCP is a connection-oriented protocol
This does not mean it has a physical connection between sender and receiver
TCP provides the function of making a connection exist virtually, also called a
virtual circuit
TCP also provides the following functions:
Dividing a chunk of data into segments
Reassembling the segments into the original chunk
Reordering and resending data when needed
Offering a reliable byte-stream delivery service
UDP, in contrast, is a connectionless protocol that offers none of these reliability guarantees
Functions the same as the Transport layer in OSI
Synchronize source and destination computers to set up the session between the
respective computers
INTERNET LAYER
The network layer, also called the internet layer, deals with packets and connects
independent networks to transport the packets across network boundaries. The
network layer protocols are the IP and the Internet Control Message Protocol
(ICMP), which is used for error reporting.
HOST-TO-NETWORK LAYER
The Host-to-network layer is the lowest layer of the TCP/IP reference model. It
combines the link layer and the physical layer of the ISO/OSI model. At this
layer, data is transferred between adjacent network nodes in a WAN or between
nodes on the same LAN.
EXAMPLE NETWORKS
The subject of computer networking covers many different kinds of networks, large and
small, well known and less well known.
We will start with the Internet, probably the best known network, and look at its history,
evolution, and technology. Then we will consider the mobile phone network. Technically, it
is quite different from the Internet, contrasting nicely with it. Next we will introduce IEEE
802.11, the dominant standard for wireless LANs. Finally, we will look at RFID and sensor
networks, technologies that extend the reach of the network to include the physical world and
everyday objects.
THE INTERNET

The Internet has revolutionized many aspects of our daily lives. It has affected the
way we do business as well as the way we spend our leisure time. Count the
ways you've used the Internet recently.
Perhaps you've sent electronic mail (e-mail) to a business associate, paid a utility
bill, read a newspaper from a distant city, or looked up a local movie schedule-all by
using the Internet.
Or maybe you researched a medical topic, booked a hotel reservation, chatted
with a fellow Trekkie, or comparison-shopped for a car.
The Internet is a communication system that has brought a wealth of information
to our fingertips and organized it for our use.
A BRIEF HISTORY
A network is a group of connected communicating devices such as computers and
printers. An internet (note the lowercase letter i) is two or more networks that
can communicate with each other.
The most notable internet is called the Internet (uppercase letter I), a collaboration of
hundreds of thousands of interconnected networks.
Private individuals as well as various organizations such as government agencies,
schools, research facilities, corporations, and libraries in more than 100 countries use
the Internet. Millions of people are users. Yet this extraordinary communication
system only came into being in 1969.
ARPANET
In the mid-1960s, mainframe computers in research organizations were standalone
devices. Computers from different manufacturers were unable to communicate with
one another.
The Advanced Research Projects Agency (ARPA) in the Department of Defense
(DoD) was interested in finding a way to connect computers so that the researchers
they funded could share their findings, thereby reducing costs and eliminating
duplication of effort.
In 1967, at an Association for Computing Machinery (ACM) meeting, ARPA
presented its ideas for ARPANET, a small network of connected computers. The
idea was that each host computer (not necessarily from the same manufacturer)
would be attached to a specialized computer, called an interface message processor
(IMP).
The IMPs, in turn, would be connected to one another. Each IMP had to be able to
communicate with other IMPs as well as with its own attached host.
By 1969, ARPANET was a reality. Four nodes, at the University of California at Los
Angeles (UCLA), the University of California at Santa Barbara (UCSB), Stanford
Research Institute (SRI), and the University of Utah, were connected via the IMPs
to form a network. Software called the Network Control Protocol (NCP) provided
communication between the hosts.
In 1972, Vint Cerf and Bob Kahn, both of whom were part of the core ARPANET
group, collaborated on what they called the Internetting Project.
Cerf and Kahn's landmark 1973 paper outlined the protocols to achieve end- to-end
delivery of packets. This paper on Transmission Control Protocol (TCP) included
concepts such as encapsulation, the datagram, and the functions of a gateway.
Shortly thereafter, authorities made a decision to split TCP into two protocols:
Transmission Control Protocol (TCP) and Internetworking Protocol (IP).
IP would handle datagram routing while TCP would be responsible for higher-level
functions such as segmentation, reassembly, and error detection. The
internetworking protocol suite became known as TCP/IP.
The Internet Today
The Internet has come a long way since the 1960s. The Internet today is not a simple
hierarchical structure. It is made up of many wide- and local-area networks joined by
connecting devices and switching stations.
It is difficult to give an accurate representation of the Internet because it is
continually changing: new networks are being added, existing networks are adding
addresses, and networks of defunct companies are being removed. Today most end
users who want an Internet connection use the services of Internet service providers
(ISPs).
There are international service providers, national service providers, regional service
providers, and local service providers. The Internet today is run by private
companies, not the government. Figure shows a conceptual (not geographic) view of
the Internet.
International Internet Service Providers:
At the top of the hierarchy are the international service providers that connect nations
together.
National Internet Service Providers:
The national Internet service providers are backbone networks created and
maintained by specialized companies.
There are many national ISPs operating in North America; some of the most
well-known are SprintLink, PSINet, UUNet Technology, AGIS, and Internet MCI.
To provide connectivity between the end users, these backbone networks are
connected by complex switching stations (normally run by a third party) called
network access points (NAPs).
Some national ISP networks are also connected to one another by private switching
stations called peering points. These normally operate at a high data rate (up to 600
Mbps).
Regional Internet Service Providers:
Regional internet service providers or regional ISPs are smaller ISPs that are
connected to one or more national ISPs. They are at the third level of the hierarchy
with a smaller data rate.
Local Internet Service Providers:
Local Internet service providers provide direct service to the end users. The local
ISPs can be connected to regional ISPs or directly to national ISPs.
Most end users are connected to the local ISPs. Note that in this sense, a local ISP
can be a company that just provides Internet services, a corporation with a
network that supplies services to its own employees, or a nonprofit organization,
such as a college or a university, that runs its own network. Each of these local ISPs
can be connected to a regional or national service provider.

PHYSICAL LAYER
A transmission medium can be broadly defined as anything that can carry
information from a source to a destination.
Classes of transmission media

GUIDED MEDIA
Media are roughly grouped into guided media, such as copper wire and fiber optics, and
unguided media, such as terrestrial wireless, satellite, and lasers through the air.
Guided media, which are those that provide a medium from one device to
another, include twisted-pair cable, coaxial cable, and fiber-optic cable. Each one has
its own niche in terms of bandwidth, delay, cost, and ease of installation and maintenance.
TWISTED-PAIR CABLE
A twisted pair consists of two conductors (normally copper), each with its own
plastic insulation, twisted together. One of the wires is used to carry signals to the
receiver, and the other is used only as a ground reference.
Unshielded Versus Shielded Twisted-Pair Cable
The most common twisted-pair cable used in communications is referred to as
unshielded twisted-pair (UTP).
STP cable has a metal foil or braided mesh covering that encases each pair of
insulated conductors. Although metal casing improves the quality of cable by
preventing the penetration of noise or crosstalk, it is bulkier and more expensive.

The most common UTP connector is RJ45 (RJ stands for registered jack)

Applications

Twisted-pair cables are used in telephone lines to provide voice and data channels.

Local-area networks, such as 10Base-T and 100Base-T, also use twisted-pair cables.

COAXIAL CABLE
Coaxial cable (or coax) carries signals of higher frequency ranges than those in
twisted pair cable.
Coax has a central core conductor of solid or stranded wire (usually copper)
enclosed in an insulating sheath, which is, in turn, encased in an outer conductor
of metal foil, braid, or a combination of the two.
The outer metallic wrapping serves both as a shield against noise and as the second
conductor, which completes the circuit.
This outer conductor is also enclosed in an insulating sheath, and the whole cable is
protected by a plastic cover.
The most common type of connector used today is the Bayonet Neill-Concelman
(BNC) connector.
Applications
Coaxial cable was widely used in analog telephone networks, digital telephone
networks
Cable TV networks also use coaxial cables.
Another common application of coaxial cable is in traditional Ethernet LANs

FIBER-OPTIC CABLE
A fiber-optic cable is made of glass or plastic and transmits signals in the form of
light. Light travels in a straight line as long as it is moving through a single
uniform substance.
If a ray of light traveling through one substance suddenly enters another
substance (of a different density), the ray changes direction.
Bending of light ray
Optical fibers use reflection to guide light through a channel. A glass or plastic
core is surrounded by a cladding of less dense glass or plastic.

PROPAGATION MODES
Multimode is so named because multiple beams from a light source move through the
core in different paths. How these beams move within the cable depends on the
structure of the core, as shown in the figure.
In multimode step-index fiber, the density of the core remains constant from the
center to the edges. A beam of light moves through this constant density in a
straight line until it reaches the interface of the core and the cladding. The term step
index refers to the suddenness of this change, which contributes to the distortion of
the signal as it passes through the fiber.
A second type of fiber, called multimode graded-index fiber, decreases this
distortion of the signal through the cable. The word index here refers to the index of
refraction.
Single-Mode: Single-mode uses step-index fiber and a highly focused source of light
that limits beams to a small range of angles, all close to the horizontal.

FIBER CONSTRUCTION
Common fiber-optic connectors include the subscriber channel (SC) connector, the
straight-tip (ST) connector, and the MT-RJ (mechanical transfer registered jack) connector.

APPLICATIONS
Fiber-optic cable is often found in backbone networks because its wide
bandwidth is cost-effective.
Some cable TV companies use a combination of optical fiber and coaxial cable,
thus creating a hybrid network.
Local-area networks such as 100Base-FX network (Fast Ethernet)
and 1000Base-X also use fiber-optic cable
Advantages and Disadvantages of Optical Fiber
Advantages: Fiber-optic cable has several advantages over metallic cable (twisted pair or coaxial).
Higher bandwidth.
Less signal attenuation. Fiber-optic transmission distance is significantly greater than
that of other guided media; a signal can run for 50 km without requiring regeneration.
Light weight. Fiber-optic cables are much lighter than copper cables.
Greater immunity to tapping.
Disadvantages
Installation and maintenance
Unidirectional light propagation. Propagation of light is unidirectional. If we need
bidirectional communication, two fibers are needed.
Cost. The cable and the interfaces are relatively more expensive than those of other
guided media.

WIRELESS TRANSMISSION
Unguided media transport electromagnetic waves without using a physical
conductor. This type of communication is often referred to as wireless
communication.
• Radio Waves
• Microwaves
• Infrared

Unguided signals can travel from the source to destination in several ways: ground
propagation, sky propagation, and line-of-sight propagation, as shown in Figure
RADIO WAVES
Electromagnetic waves ranging in frequencies between 3 kHz and 1 GHz are
normally called radio waves. Radio waves are Omni directional.
When an antenna transmits radio waves, they are propagated in all directions. This
means that the sending and receiving antennas do not have to be aligned.
A sending antenna sends waves that can be received by any receiving antenna.
The Omni directional property has a disadvantage, too.
The radio waves transmitted by one antenna are susceptible to interference by
another antenna that may send signals using the same frequency or band.
Applications
The Omni directional characteristics of radio waves make them useful for
multicasting, in which there is one sender but many receivers. AM and FM radio,
television, maritime radio, cordless phones, and paging are examples of multicasting.
MICROWAVES
Electromagnetic waves having frequencies between 1 and 300 GHz are called
microwaves.
Microwaves are unidirectional. The sending and receiving antennas need to be
aligned. The unidirectional property has an obvious advantage.
A pair of antennas can be aligned without interfering with another pair of aligned
antennas.
Applications
Microwaves are used for unicast communication such as cellular telephones,
satellite networks, and wireless LANs.
INFRARED
Infrared waves, with frequencies from 300 GHz to 400 THz (wavelengths from 1
mm to 770 nm), can be used for short-range communication. Infrared waves, having
high frequencies, cannot penetrate walls.
This advantageous characteristic prevents interference between one system and
another; a short- range communication system in one room cannot be affected by
another system in the next room.
When we use our infrared remote control, we do not interfere with the use of the
remote by our neighbors. However, this same property makes infrared signals useless for long-range communication.
In addition, we cannot use infrared waves outside a building because the sun's rays
contain infrared waves that can interfere with the communication.
Applications
Infrared signals can be used for short-range communication in a closed area
using line-of-sight propagation.
UNIT – II
DATA LINK LAYER
• Design issues, framing
• Error detection and correction
ELEMENTARY DATA LINK PROTOCOLS
• Simplex protocol
• A simplex stop and wait protocol for an error-free channel
• A simplex stop and wait protocol for a noisy channel
SLIDING WINDOW PROTOCOLS:
• A one-bit sliding window protocol
• A protocol using Go-Back-N
• A protocol using Selective Repeat
• Example data link protocols
MEDIUM ACCESS SUB LAYER
• The channel allocation problem
MULTIPLE ACCESS PROTOCOLS
• ALOHA
• Carrier sense multiple access protocols
• Collision free protocols
• Wireless LANs
• Data link layer switching

DATA LINK LAYER FUNCTIONS (SERVICES)

1) Providing services to the network layer


Unacknowledged connectionless service.
Appropriate for low error rate and real-time traffic. Ex: Ethernet
Acknowledged connectionless service.
Useful in unreliable channels, Wi-Fi. Ack/Timer/Resend
Acknowledged connection-oriented service.
Guarantee frames are received exactly once and in the right order. Appropriate over
long, unreliable links such as a satellite channel or a long-distance telephone circuit
2) Framing: The data link layer divides the stream of bits received from the network
layer into manageable data units called frames.
Physical Addressing: The Data Link layer adds a header to the frame in order to
define physical address of the sender or receiver of the frame, if the frames are to be
distributed to different systems on the network.
3)Flow Control: A receiving node can receive the frames at a faster rate than it
can process the frame. Without flow control, the receiver's buffer can overflow,
and frames can get lost. To overcome this problem, the data link layer uses the flow
control to prevent the sending node on one side of the link from overwhelming
the receiving node on another side of the link. This prevents traffic jam at the
receiver side.
4) Error Control: Error control is achieved by adding a trailer at the end of the
frame. Duplication of frames are also prevented by using this mechanism. Data Link
Layers adds mechanism to prevent duplication of frames.
5) Error detection: Errors can be introduced by signal attenuation and noise. Data
Link Layer protocol provides a mechanism to detect one or more errors. This is
achieved by adding error detection bits in the frame and then receiving node can
perform an error check.
6) Error correction: Error correction is similar to the Error detection, except that
receiving node not only detects the errors but also determine where the errors
have occurred in the frame.
7) Access Control: Protocols of this layer determine which of the devices has
control over the link at any given time, when two or more devices are connected to
the same link.
8) Reliable delivery: Data Link Layer provides a reliable delivery service, i.e.,
transmits the network layer datagram without any error. A reliable delivery service is
accomplished with transmissions and acknowledgements. A data link layer mainly
provides the reliable delivery service over links that have higher error rates, so that
errors can be corrected locally, on the link at which they occur, rather than forcing an
end-to-end retransmission of the data.
9) Half-Duplex & Full-Duplex: In a Full-Duplex mode, both the nodes can
transmit the data at the same time. In a Half-Duplex mode, only one node can
transmit the data at the same time.
FRAMING

To provide service to the network layer, the data link layer must use the service
provided to it by the physical layer. What the physical layer does is accept a raw bit
stream and attempt to deliver it to the destination. This bit stream is not guaranteed
to be error free. The number of bits received may be less than, equal to, or more
than the number of bits transmitted, and they may have different values. It is up to
the data link layer to detect and, if necessary, correct errors. The usual approach is
for the data link layer to break the bit stream up into discrete frames and compute the
checksum for each frame (framing). When a frame arrives at the destination, the
checksum is recomputed. If the newly computed checksum is different from the
one contained in the frame, the data link layer knows that an error has occurred and
takes steps to deal with it (e.g., discarding the bad frame and possibly also sending
back an error report).
We will look at four framing methods:
 Character count.
 Flag bytes with byte stuffing.
 Starting and ending flags, with bit stuffing.
 Physical layer coding violations.
Character count method uses a field in the header to specify the number of
characters in the frame. When the data link layer at the destination sees the character
count, it knows how many characters follow and hence where the end of the frame is.
This technique is shown in the figure for four frames of sizes 5, 5, 8, and 8 characters,
respectively.

A character stream. (a) Without errors. (b) With one error
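A small Python sketch of how a receiver could split such a stream using the character count (assuming, as in the figure, that the count includes the count byte itself; the function name is ours):

    # Split a framed byte stream where each frame starts with its own length.
    def split_frames(stream):
        frames, i = [], 0
        while i < len(stream):
            count = stream[i]                  # length field of the next frame
            frames.append(stream[i:i + count])
            i += count
        return frames

    # The four frames of the figure: sizes 5, 5, 8 and 8 characters.
    stream = bytes([5, 1, 2, 3, 4,
                    5, 6, 7, 8, 9,
                    8, 0, 1, 2, 3, 4, 5, 6,
                    8, 7, 8, 9, 0, 1, 2, 3])
    for frame in split_frames(stream):
        print(list(frame))
    # If one count byte is corrupted, all later frame boundaries are lost,
    # which is exactly the error case shown in part (b) of the figure.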


Flag bytes with byte stuffing method gets around the problem of resynchronization
after an error by having each frame start and end with special bytes. In the past, the
starting and ending bytes were different, but in recent years most protocols have used
the same byte, called a flag byte, as both the starting and ending delimiter, as shown
in Fig. (a) as FLAG.
In this way, if the receiver ever loses synchronization, it can just search for the flag
byte to find the end of the current frame. Two consecutive flag bytes indicate the end
of one frame and start of the next one.

A frame delimited by flag bytes (b) Four examples of byte sequences before and
after byte stuffing
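The sketch below shows the idea of byte stuffing in Python; the FLAG and ESC byte values are only illustrative choices, not values mandated by the notes:

    FLAG, ESC = 0x7E, 0x7D   # illustrative delimiter and escape byte values

    def byte_stuff(payload):
        out = bytearray([FLAG])              # frame starts with a flag byte
        for b in payload:
            if b in (FLAG, ESC):
                out.append(ESC)              # stuff an escape byte before flag/escape data
            out.append(b)
        out.append(FLAG)                     # frame ends with a flag byte
        return bytes(out)

    print(byte_stuff(b'A\x7eB').hex())       # the flag byte inside the data gets escaped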
Starting and ending flags, with bit stuffing allows data frames to contain an
arbitrary number of bits and allows character codes with an arbitrary number of bits
per character. It works like this. Each frame begins and ends with a special bit
pattern, 01111110 (in fact, a flag byte).
Whenever the sender's data link layer encounters five consecutive 1s in the data, it
automatically stuffs a 0 bit into the outgoing bit stream. This bit stuffing is analogous
to byte stuffing, in which an escape byte is stuffed into the outgoing character stream
before a flag byte in the data.
When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it
automatically de-stuffs (i.e., deletes) the 0 bit. Just as byte stuffing is completely
transparent to the network layer in both computers, so is bit stuffing. If the user
data contain the flag pattern, 01111110, this flag is transmitted as 011111010
but stored in the receiver's memory as 01111110.
Fig: Bit stuffing. (a) The original data. (b) The data as they appear on the line.

(c) The data as they are stored in the receiver's memory after destuffing.
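A minimal Python sketch of the sender-side stuffing rule described above (bits are represented as a string of '0'/'1' characters for readability; the function name is ours):

    def bit_stuff(bits):
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == '1' else 0
            if run == 5:                 # five consecutive 1s seen: stuff a 0
                out.append('0')
                run = 0
        return ''.join(out)

    print(bit_stuff('01111110'))         # prints 011111010, as in the flag-pattern example above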

Physical layer coding violations method of framing is only applicable to


networks in which the encoding on the physical medium contains some
redundancy. For example, some LANs encode 1 bit of data by using 2 physical
bits. Normally, a 1 bit is a high-low pair and a 0 bit is a low-high pair.
The scheme means that every data bit has a transition in the middle, making it
easy for the receiver to locate the bit boundaries. The combinations high- high
and low-low are not used for data but are used for delimiting frames in some
protocols.

ERROR DETECTION

Error is a condition when the receiver’s information does not match with the
sender’s information. During transmission, digital signals suffer from noise that
can introduce errors in the binary bits travelling from sender to receiver. That
means a 0 bit may change to 1 or a 1 bit may change to 0.
Error Detecting Codes (implemented either at the Data Link layer or the Transport
Layer of the OSI Model): Whenever a message is transmitted, it may get
scrambled by noise or data may get corrupted. To avoid this, we use error-
detecting codes which are additional data added to a given digital message to
help us detect if any error has occurred during transmission of the message.
Basic approach used for error detection is the use of redundancy bits, where
additional bits are added to facilitate detection of errors. Some popular techniques
for error detection are:
1. Simple Parity check
2. Two-dimensional Parity check
3. Checksum
4. Cyclic redundancy check
1) Simple Parity check
Blocks of data from the source are passed through a parity bit generator, where a parity
bit of 1 is added to the block if it contains an odd number of 1s, and a parity bit of 0 is
added if it contains an even number of 1s. This scheme makes the total number of
1s even, which is why it is called even parity checking.
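As a small illustration, here is a Python sketch of even parity (bit blocks written as '0'/'1' strings; the function names are ours):

    def add_even_parity(block):
        parity = '1' if block.count('1') % 2 else '0'   # make the total number of 1s even
        return block + parity

    def parity_ok(received):
        return received.count('1') % 2 == 0

    word = add_even_parity('1011011')                    # five 1s, so the parity bit is 1
    print(word, parity_ok(word))                         # 10110111 True
    print(parity_ok(word[:3] + '0' + word[4:]))          # a single flipped bit is detected: False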

2) Two-dimensional Parity check


Parity check bits are calculated for each row, which is equivalent to a simple parity
check bit. Parity check bits are also calculated for all columns, then both are sent
along with the data. At the receiving end these are compared with the parity bits
calculated on the received data.
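The same idea can be sketched in a few lines of Python (rows of data bits given as lists; the values are illustrative):

    def two_dimensional_parity(rows):
        row_parity = [sum(r) % 2 for r in rows]              # one parity bit per row
        col_parity = [sum(c) % 2 for c in zip(*rows)]        # one parity bit per column
        return row_parity, col_parity

    rows = [[1, 0, 1, 1],
            [0, 1, 1, 0],
            [1, 1, 0, 1]]
    print(two_dimensional_parity(rows))   # ([1, 0, 1], [0, 0, 0, 0])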
3) Checksum
In checksum error detection scheme, the data is divided into k segments each
of m bits. In the sender’s end the segments are added using 1’s
complement arithmetic to get the sum. The sum is complemented to get the
checksum. The checksum segment is sent along with the data segments.
At the receiver’s end, all received segments are added using 1’s complement
arithmetic to get the sum. The sum is complemented.
If the result is zero, the received data is accepted; otherwise discarded.
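A compact Python sketch of this scheme for 4-bit segments (m = 4); the segment size and data values are only an illustrative example:

    M = 4
    MASK = (1 << M) - 1                      # 0b1111 for 4-bit segments

    def ones_complement_sum(segments):
        total = 0
        for s in segments:
            total += s
            total = (total & MASK) + (total >> M)   # end-around carry
        return total

    def make_checksum(segments):
        return (~ones_complement_sum(segments)) & MASK  # complement of the sum

    data = [0b1001, 0b1110, 0b0101]
    checksum = make_checksum(data)
    # Receiver: add all segments including the checksum; the complemented result must be 0.
    print(bin(checksum), (~ones_complement_sum(data + [checksum])) & MASK)  # 0b10 0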
4) Cyclic redundancy checks (CRC)
Unlike checksum scheme, which is based on addition, CRC is based on binary
division.
In CRC, a sequence of redundant bits, called cyclic redundancy check bits, are
appended to the end of data unit so that the resulting data unit becomes exactly
divisible by a second, predetermined binary number.
At the destination, the incoming data unit is divided by the same number. If at this
step there is no remainder, the data unit is assumed to be correct and is therefore
accepted.
A remainder indicates that the data unit has been damaged in transit and
therefore must be rejected.
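A short Python sketch of CRC generation by modulo-2 (XOR) division; the data word 100100 and the divisor 1101 are just an illustrative example, and the function name is ours:

    def crc_remainder(data, generator):
        n = len(generator) - 1
        bits = list(data + '0' * n)          # append n zero bits to the data
        for i in range(len(data)):
            if bits[i] == '1':               # XOR the generator in at this position
                for j, g in enumerate(generator):
                    bits[i + j] = '1' if bits[i + j] != g else '0'
        return ''.join(bits[-n:])            # the remainder is the CRC

    data, gen = '100100', '1101'
    crc = crc_remainder(data, gen)
    print(crc)            # 001
    print(data + crc)     # 100100001 is transmitted; dividing it by 1101 leaves remainder 000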
ERROR CORRECTION
Error Correction codes are used to detect and correct the errors when data is
transmitted from the sender to the receiver.
Error Correction can be handled in two ways:
Backward error correction: Once the error is discovered, the receiver requests
the sender to retransmit the entire data unit.
Forward error correction: In this case, the receiver uses the error-correcting
code which automatically corrects the errors.
A single additional bit can detect the error, but cannot correct it.
For correcting the errors, one has to know the exact position of the error. For
example, if we want to correct a single-bit error in a seven-bit data unit, the error
correction code must determine which one of the seven bits is in error. To achieve this,
we have to add some additional redundant bits.
Suppose r is the number of redundant bits and d is the total number of the data bits.
The number of redundant bits r can be calculated by using the formula:
2^r >= d + r + 1
The value of r is calculated by using the above formula. For example, if the value of
d is 4, then the smallest value of r that satisfies the above relation is 3.
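A quick way to check this relation (illustrative Python sketch):

    def redundant_bits(d):
        r = 0
        while 2 ** r < d + r + 1:     # smallest r with 2^r >= d + r + 1
            r += 1
        return r

    print(redundant_bits(4))   # 3
    print(redundant_bits(7))   # 4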
To determine the position of the bit which is in error, a technique developed by
R.W. Hamming, known as the Hamming code, can be applied to any length of data
unit; it uses the relationship between data bits and redundant bits.
HAMMING CODE
Parity bits: The bit which is appended to the original data of binary bits so that
the total number of 1s is even or odd.
Even parity: To check for even parity, if the total number of 1s is even, then the
value of the parity bit is 0. If the total number of 1s occurrences is odd, then the
value of the parity bit is 1.
Odd Parity: To check for odd parity, if the total number of 1s is even, then the
value of parity bit is 1. If the total number of 1s is odd, then the value of parity bit
is 0.

Algorithm of Hamming code:


Information of 'd' bits is added to the redundant bits 'r' to form a block of d+r bits.
The location of each of the (d+r) digits is assigned a decimal value.
The 'r' bits are placed in positions 1, 2, 4, ..., 2^(k-1), that is, the positions that are powers of 2.
At the receiving end, the parity bits are recalculated. The decimal value of the
parity bits determines the position of an error.
Relationship b/w Error position & binary number
Let's understand the concept of Hamming code through an example: Suppose the

original data is 1010 which is to be sent.


Total number of data bits 'd' = 4
Number of redundant bits r: 2^r >= d + r + 1
2^r >= 4 + r + 1
Therefore, the value of r that satisfies the above relation is 3. Total number of bits
= d + r = 4 + 3 = 7.

Determining the position of the redundant bits


The number of redundant bits is 3. The three bits are represented by r1, r2, r4. The
positions of the redundant bits correspond to powers of 2.
Therefore, their corresponding positions are 1 (2^0), 2 (2^1), and 4 (2^2).
The position of r1 = 1, The position of r2 = 2, The position of r4 = 4

Representation of Data on the addition of parity bits:

Determining the Parity bits


Determining the r1 bit: The r1 bit is calculated by performing a parity check on
the bit positions whose binary representation includes 1 in the first position.

We observe from the above figure that the bit positions that include 1 in the first
position are 1, 3, 5, 7. Now, we perform the even-parity check at these bit
positions. The total number of 1s at these bit positions corresponding to r1 is even;
therefore, the value of the r1 bit is 0.
Determining r2 bit: The r2 bit is calculated by performing a parity check on the bit
positions whose binary representation includes 1 in the second position

We observe from the above figure that the bit positions that include 1 in the second
position are 2, 3, 6, 7. Now, we perform the even-parity check at these bit
positions. The total number of 1s at these bit positions corresponding to r2 is odd;
therefore, the value of the r2 bit is 1.
Determining r4 bit: The r4 bit is calculated by performing a parity check on the bit
positions whose binary representation includes 1 in the third position.
We observe from the above figure that the bit positions that include 1 in the
third position are 4, 5, 6, 7. Now, we perform the even-parity check at these bit
positions. The total number of 1s at these bit positions corresponding to r4 is even;
therefore, the value of the r4 bit is 0.
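The whole worked example can be condensed into a small Python sketch (the position numbering and bit placement follow the figures above; this is an illustration, not a general library routine):

    def hamming74_encode(data):
        # data is a 4-character bit string; its bits go to positions 7, 6, 5 and 3
        p7, p6, p5, p3 = (int(b) for b in data)
        r1 = p3 ^ p5 ^ p7          # parity over positions 1, 3, 5, 7
        r2 = p3 ^ p6 ^ p7          # parity over positions 2, 3, 6, 7
        r4 = p5 ^ p6 ^ p7          # parity over positions 4, 5, 6, 7
        # codeword written from position 7 down to position 1
        return "%d%d%d%d%d%d%d" % (p7, p6, p5, r4, p3, r2, r1)

    print(hamming74_encode("1010"))   # 1010010: r1 = 0, r2 = 1, r4 = 0, as computed above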

ELEMENTARY DATA LINK PROTOCOLS


Protocols in the data link layer are designed so that this layer can perform its basic functions:
framing, error control and flow control. Framing is the process of dividing bit streams from
the physical layer into data frames whose size ranges from a few hundred to a few thousand
bytes.
Error control mechanisms deal with transmission errors and retransmission of corrupted and
lost frames. Flow control regulates the speed of delivery so that a fast sender does not drown
a slow receiver.

Simplex Protocol
The Simplex protocol is a hypothetical protocol designed for unidirectional data transmission
over an ideal channel, i.e., a channel through which transmission can never go wrong. It has
distinct procedures for sender and receiver. The sender simply sends all its data onto the
channel as soon as the data is available in its buffer. The receiver is assumed to process all
incoming data instantly. It is hypothetical since it does not handle flow control or error
control.
Stop – and – Wait Protocol
The Stop – and – Wait protocol is also designed for a noiseless channel. It provides unidirectional
data transmission without any error control facilities. However, it provides for flow control so that
a fast sender does not drown a slow receiver. The receiver has a finite buffer size and finite
processing speed. The sender can send a frame only when it has received an indication from the
receiver that it is available for further data processing.
Stop – and – Wait ARQ
Stop – and – wait Automatic Repeat Request (Stop – and – Wait ARQ) is a variation of the
above protocol with added error control mechanisms, appropriate for noisy channels. The
sender keeps a copy of the sent frame. It then waits for a finite time to receive a positive
acknowledgement from receiver. If the timer expires or a negative acknowledgement is
received, the frame is retransmitted. If a positive acknowledgement is received, then the next
frame is sent.
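
A rough Python sketch of the Stop – and – Wait ARQ sender logic described above (the lossy-channel
model and the helper name unreliable_send are assumptions made purely for illustration, not part of
any standard library):

    import random

    def unreliable_send(frame, loss_prob=0.3):
        """Returns the ACK sequence number, or None if the frame or ACK was lost."""
        if random.random() < loss_prob:
            return None                     # frame or acknowledgement lost
        return frame["seq"]

    def stop_and_wait_send(data_items):
        seq = 0
        for payload in data_items:
            frame = {"seq": seq, "payload": payload}
            while True:                     # keep a copy and retransmit on timeout
                ack = unreliable_send(frame)
                if ack == seq:              # positive acknowledgement: move on
                    break
                print("timeout for frame", seq, "- retransmitting")
            print("frame", seq, "delivered")
            seq = 1 - seq                   # alternate 0/1 sequence numbers

    stop_and_wait_send(["A", "B", "C"])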
Go – Back – N ARQ
Go – Back – N ARQ provides for sending multiple frames before receiving the
acknowledgement for the first frame. It uses the concept of sliding window, and so is also
called sliding window protocol. The frames are sequentially numbered and a finite number of
frames are sent. If the acknowledgement of a frame is not received within the time period, all
frames starting from that frame are retransmitted.
Selective Repeat ARQ
This protocol also provides for sending multiple frames before receiving the
acknowledgement for the first frame. However, here only the erroneous or lost frames are
retransmitted, while the good frames are received and buffered.
Elementary Data Link protocols are classified into three categories, as given below −
 Protocol 1 − Unrestricted simplex protocol
 Protocol 2 − Simplex stop and wait protocol
 Protocol 3 − Simplex protocol for noisy channels.
UNRESTRICTED SIMPLEX PROTOCOL
Data transmitting is carried out in one direction only. The transmission (Tx) and receiving
(Rx) are always ready and the processing time can be ignored. In this protocol, infinite buffer
space is available, and no errors are occurring that is no damage frames and no lost frames.
The Unrestricted Simplex Protocol is diagrammatically represented as follows −
SIMPLEX STOP AND WAIT PROTOCOL FOR AN ERROR FREE CHANNEL
In this protocol we assume that data is transmitted in one direction only. No error occurs; the
receiver can only process the received information at finite rate. These assumptions imply
that the transmitter cannot send frames at rate faster than the receiver can process them.
The main problem here is how to prevent the sender from flooding the receiver. The general
solution for this problem is to have the receiver send some sort of feedback to sender, the
process is as follows −
Step1 − The receiver sends the acknowledgement frame back to the sender telling the sender
that the last received frame has been processed and passed to the host.
Step 2 − Permission to send the next frame is granted.
Step 3 − The sender after sending the sent frame has to wait for an acknowledge frame from
the receiver before sending another frame.
This protocol is called Simplex Stop and wait protocol, the sender sends one frame and waits
for feedback from the receiver. When the ACK arrives, the sender sends the next frame.
The Simplex Stop and Wait Protocol is diagrammatically represented as follows −
SIMPLEX STOP AND WAIT PROTOCOL FOR NOISY CHANNEL
Data transfer is only in one direction; the sender and the receiver are separate, with finite
processing capacity and speed at the receiver. Since the channel is noisy, errors in data frames
or acknowledgement frames are expected. Every frame carries a unique sequence number.
After a frame has been transmitted, a timer is started for a finite time. If the acknowledgement
is not received before the timer expires, or if the acknowledgement is corrupted or the sent data
frame is damaged, the frame is retransmitted; without the timer, the sender could wait
indefinitely before transmitting the next frame.
The Simplex Protocol for Noisy Channel is diagrammatically represented as follows −
SLIDING WINDOW PROTOCOLS
The sliding window is a technique for sending multiple frames at a time. It controls the data
packets between the two devices where reliable and gradual delivery of data frames is
needed. It is also used in TCP (Transmission Control Protocol).
In this technique, each frame is sent with a sequence number. The sequence numbers are
used to find the missing data at the receiver end. The sliding window technique also uses the
sequence numbers to avoid duplicate data.

Types of Sliding Window Protocol


1) A One-Bit Sliding Window Protocol
2) Go-Back-N ARQ
3) Selective Repeat ARQ
A ONE-BIT SLIDING WINDOW PROTOCOL
Sliding window protocols are data link layer protocols for reliable and sequential delivery of
data frames. The sliding window is also used in Transmission Control Protocol. In these
protocols, the sender has a buffer called the sending window and the receiver has buffer
called the receiving window.
In the one – bit sliding window protocol, the size of the window is 1. So the sender transmits a
frame, waits for its acknowledgment, and then transmits the next frame. Thus it uses the concept
of the stop-and-wait protocol. This protocol provides for full – duplex communication;
hence, the acknowledgment is attached to the next data frame to be sent, by
piggybacking.
Working Principle
The data frames to be transmitted additionally have an acknowledgment field, Ack field that
is of a few bits’ length. The Ack field contains the sequence number of the last frame received
without error. If this sequence number matches with the sequence number of the frame to be
sent, then it is inferred that there is no error and the frame is transmitted. Otherwise, it is
inferred that there is an error in the frame and the previous frame is retransmitted.
Since this is a bi-directional protocol, the same algorithm applies to both the communicating
parties.
Illustrative Example
The following diagram depicts a scenario with sequence numbers 0, 1, 2, 3, 0, 1, 2 and so on.
It depicts the sliding windows in the sending and the receiving stations during frame
transmission.
A PROTOCOL USING GO-BACK-N
Before understanding the working of Go-Back-N ARQ, we first look at the sliding window
protocol. As we know that the sliding window protocol is different from the stop-and-wait
protocol.
In the stop-and-wait protocol, the sender can send only one frame at a time and cannot send
the next frame without receiving the acknowledgment of the previously sent frame, whereas,
in the case of sliding window protocol, the multiple frames can be sent at a time.
What is Go-Back-N ARQ
In Go-Back-N ARQ, N is the sender's window size. Suppose we say that Go-Back-3, which
means that the three frames can be sent at a time before expecting the acknowledgment from
the receiver.
It uses the principle of protocol pipelining in which the multiple frames can be sent before
receiving the acknowledgment of the first frame. If we have five frames and the concept is
Go-Back-3, which means that the three frames can be sent, i.e., frame no 1, frame no 2, frame
no 3 can be sent before expecting the acknowledgment of frame no 1.
In Go-Back-N ARQ, the frames are numbered sequentially as Go-Back-N ARQ sends the
multiple frames at a time that requires the numbering approach to distinguish the frame from
another frame, and these numbers are known as the sequential numbers.
The number of frames that can be sent at a time totally depends on the size of the sender's
window. So, we can say that 'N' is the number of frames that can be sent at a time before
receiving the acknowledgment from the receiver.
If the acknowledgment of a frame is not received within an agreed-upon time period, then all
the frames available in the current window will be retransmitted. Suppose we have sent the
frame no 5, but we didn't receive the acknowledgment of frame no 5, and the current window
is holding three frames, then these three frames will be retransmitted.
The sequence number of the outbound frames depends upon the size of the sender's window.
Suppose the sender's window size is 2, and we have ten frames to send, then the sequence
numbers will not be 1,2,3,4,5,6,7,8,9,10. Let's understand through an example.
N is the sender's window size.
If the size of the sender's window is 4 then the sequence number will be 0,1,2,3,0,1,2,3,0,1,2,
and so on.
The number of bits in the sequence number is 2 to generate the binary sequence 00,01,10,11.
Working of Go-Back-N ARQ
Suppose there are a sender and a receiver, and let's assume that there are 11 frames to be sent.
These frames are represented as 0,1,2,3,4,5,6,7,8,9,10, and these are the sequence numbers of
the frames. Mainly, the sequence number is decided by the sender's window size. But, for the
better understanding, we took the running sequence numbers, i.e., 0,1,2,3,4,5,6,7,8,9,10. Let's
consider the window size as 4, which means that the four frames can be sent at a time before
expecting the acknowledgment of the first frame.
Step 1: Firstly, the sender will send the first four frames to the receiver, i.e., 0,1,2,3, and now
the sender is expected to receive the acknowledgment of the 0th frame.

Let's assume that the receiver has sent the acknowledgment for the 0th frame, and the sender
has successfully received it.

The sender will then send the next frame, i.e., 4, and the window slides containing four
frames (1,2,3,4).
The receiver will then send the acknowledgment for the frame no 1. After receiving the
acknowledgment, the sender will send the next frame, i.e., frame no 5, and the window will
slide having four frames (2,3,4,5).

Now, let's assume that the receiver does not acknowledge frame no 2, either because the frame is
lost or because the acknowledgment is lost. Instead of sending frame no 6, the sender goes back to
2, which is the first frame of the current window, and retransmits all the frames in the current
window, i.e., 2, 3, 4, 5.
Important points related to Go-Back-N ARQ:
o In Go-Back-N, N determines the sender's window size, and the size of the receiver's
window is always 1.
o It does not consider the corrupted frames and simply discards them.
o It does not accept the frames which are out of order and discards them.
o If the sender does not receive the acknowledgment, it leads to the retransmission of all
the current window frames.
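
The behaviour above can be summarised in a small Python sketch (the window size, the list of
frames, and the single lost acknowledgment are assumptions chosen to mirror the example; this is
not a full protocol implementation):

    # Go-Back-N sender with window size N = 4 and frames 0..10.
    N = 4
    frames = list(range(11))
    lost_once = {2}                    # frame 2 (or its ACK) is lost the first time

    base = 0                           # oldest unacknowledged frame
    while base < len(frames):
        window = frames[base:base + N]
        print("sending window:", window)
        for f in window:
            if f in lost_once:         # acknowledgement never arrives for f
                lost_once.discard(f)
                print("timeout on frame", f, ": go back and resend the whole window")
                break
            base = f + 1               # cumulative ACK slides the window forward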
PROTOCOL USING SELECTIVE REPEAT
In Go-Back-N ARQ, the receiver keeps track of only one variable, and there is no
need to buffer out-of- order frames; they are simply discarded. However, this
protocol is very inefficient for a noisy link.
For noisy links, there is another mechanism that does not resend N frames when
just one frame is damaged; only the damaged frame is resent. This mechanism is
called Selective Repeat ARQ.
It is more efficient for noisy links, but the processing at the receiver is more
complex.
Sender Window: The sender window works in the same way as the Go-Back-N sender window
(before and after sliding); the only difference between the Go-Back-N and Selective Repeat
sender windows is the window size.

Receiver window
The receiver window in Selective Repeat is totally different from the one in Go-
Back-N. First, the size of the receive window is the same as the size of the
send window (2^(m-1)).
Because the sizes of the send window and receive window are the same, all
the frames in the send window can arrive out of order and be stored until they can be
delivered.
However, the receiver never delivers packets out of order to the network layer. Those
slots inside the window that are colored define frames that have arrived out of order
and are waiting for their neighbors to arrive before delivery to the network layer.
In Selective Repeat ARQ, the size of the sender and receiver windows must be at
most one-half of 2^m.
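
The window-size rule can be checked numerically with a tiny sketch (m is the number of
sequence-number bits; the value m = 3 is an illustrative assumption):

    m = 3                                           # sequence numbers 0 .. 2**m - 1
    print("sequence numbers:", 2 ** m)              # 8
    print("Go-Back-N sender window:", 2 ** m - 1)   # up to 7
    print("Selective Repeat window:", 2 ** (m - 1)) # at most 4, i.e. one-half of 2**m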
EXAMPLE DATA LINK PROTOCOLS
Data Link Layer protocols are generally responsible to simply ensure and confirm that the
bits and bytes that are received are identical to the bits and bytes being transferred. It is
basically a set of specifications that are used for implementation of data link layer just above
the physical layer of the Open System Interconnections (OSI) Model.
There are various data link protocols that are required for Wide Area Network (WAN) and
modem connections. Logical Link Control (LLC) is a data link protocol of Local Area
Network (LAN). Some of data link protocols are given below:

Synchronous Data Link Control (SDLC)
SDLC is basically a communication protocol for computers. It supports multipoint links as well
as error recovery and error correction. It is usually used to carry SNA (Systems Network
Architecture) traffic and is the precursor to HDLC. It was designed and developed by IBM in
1975. It is used to connect remote devices to mainframe computers at central locations, in
point-to-point (one-to-one) or point-to-multipoint (one-to-many) connections. It also ensures
that data units arrive correctly and with the right flow from one network point to the next.
High-Level Data Link Control (HDLC)
HDLC is a protocol that is now regarded as an umbrella under which many wide area protocols
sit. It has also been adopted as a part of the X.25 network. It was originally created and
developed by ISO in 1979 and is generally based on SDLC. It provides both best-effort
unreliable service and reliable service. HDLC is a bit-oriented protocol that is applicable to
both point-to-point and multipoint communications.

Serial Line Internet Protocol (SLIP)
SLIP is an older protocol that simply adds a framing byte at the end of an IP packet.
It is basically a data link control facility required for transferring IP packets, usually
between an Internet Service Provider (ISP) and a home user over a dial-up link.
It is an encapsulation of TCP/IP designed to work over serial ports and several router
connections for communication. It has some limitations: it does not provide mechanisms such
as error detection or error correction.
Point to Point Protocol (PPP)
PPP is a protocol that provides basically the same functionality as SLIP. It is a more
robust protocol that can transport other types of packets in addition to IP packets. It
can also be used over dial-up and leased router-to-router lines. It provides a framing
method to delimit frames.
It is a character-oriented protocol that also provides error detection. It comprises two
component protocols, i.e., NCP and LCP.
Link Control Protocol (LCP)
LCP is a PPP protocol used for establishing, configuring, testing, maintaining, and
terminating links for the transmission of data frames. (The related Logical Link Control,
defined in IEEE 802.2, provides HDLC-style services on a LAN.)
Link Access Procedure (LAP)
LAP protocols are data link layer protocols that are required for framing and
transferring data across point-to-point links. They also include some reliability service features.
There are basically three types of LAP, i.e., LAPB (Link Access Procedure Balanced), LAPD
(Link Access Procedure D-Channel), and LAPF (Link Access Procedure Frame-Mode Bearer
Services). LAP originated from IBM SDLC, which was submitted by IBM to the
ISO for standardization.
Network Control Protocol (NCP)
NCP was an older protocol implemented by the ARPANET. It allowed users to access and use
computers and devices at remote locations and to transfer files among two or more computers.
In PPP, NCP refers to a set of protocols forming part of PPP; an NCP is available for each
higher-layer protocol supported by PPP. The original ARPANET NCP was replaced by TCP/IP
in the 1980s.

THE MEDIUM ACCESS SUB LAYER


To coordinate the access to the channel, multiple access protocols are requiring. All these
protocols belong to the MAC sub layer. Data Link layer is divided into two sub layers:
1. Logical Link Control (LLC)- is responsible for error control & flow control.
2. Medium Access Control (MAC)- MAC is responsible for multiple access resolutions
The following diagram depicts the position of the MAC layer

Functions of MAC Layer


It provides an abstraction of the physical layer to the LLC and upper layers of the OSI
network.
It is responsible for encapsulating frames so that they are suitable for transmission via the
physical medium.
It resolves the addressing of source station as well as the destination station, or groups of
destination stations.
It performs multiple access resolutions when more than one data frame is to be transmitted. It
determines the channel access methods for transmission.
It also performs collision resolution and initiating retransmission in case of collisions.
It generates the frame check sequences and thus contributes to protection against transmission
errors.
MAC Addresses
MAC address or media access control address is a unique identifier allotted to a network
interface controller (NIC) of a device. It is used as a network address for data transmission
within a network segment like Ethernet, Wi-Fi, and Bluetooth.
MAC address is assigned to a network adapter at the time of manufacturing. It is hardwired
or hard-coded in the network interface card (NIC). A MAC address comprises six groups
of two hexadecimal digits, separated by hyphens, colons, or no separators. An example of a
MAC address is 00:0A:89:5B:F0:11.
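
As a small illustration (the function name normalize_mac is invented for these notes), a MAC
address written with any of the usual separators can be normalized to the colon-separated form:

    import re

    def normalize_mac(mac):
        """Return the address as six colon-separated pairs, e.g. 00:0A:89:5B:F0:11."""
        digits = re.sub(r"[^0-9A-Fa-f]", "", mac)    # keep only the 12 hex digits
        if len(digits) != 12:
            raise ValueError("a MAC address has exactly 12 hexadecimal digits")
        return ":".join(digits[i:i + 2] for i in range(0, 12, 2)).upper()

    print(normalize_mac("00-0a-89-5b-f0-11"))        # -> 00:0A:89:5B:F0:11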

THE CHANNEL ALLOCATION PROBLEM


When there is more than one user who desire to access a shared network channel, an
algorithm is deployed for channel allocation among the competing users. The network
channel may be a single cable or optical fiber connecting multiple nodes, or a portion of the
wireless spectrum.
Channel allocation algorithms allocate the wired channels and bandwidths to the users, who
may be base stations, access points or terminal equipment. In broadcast networks, single
channel is shared by several stations. This channel can be allocated to only one transmitting
user at a time. There are two different methods of channel allocations:
1) Static Channel Allocation- a single channel is divided among various users either on the
basis of frequency (FDM) or on the basis of time (TDM). In FDM, fixed frequency is
assigned to each user, whereas, in TDM, fixed time slot is assigned to each user.
2. Dynamic Channel Allocation- no user is assigned fixed frequency or fixed time slot. All
users are dynamically assigned frequency or time slot, depending upon the requirements of
the user

MULTIPLE ACCESS PROTOCOLS

The Data Link Layer is responsible for transmission of data between two nodes. Its main
functions are-
Data Link Control
Multiple Access Control
Data Link control
The data link control is responsible for reliable transmission of message over transmission
channel by using techniques like framing, error control and flow control. For Data link
control refer to – Stop and Wait ARQ
Multiple Access Control
If there is a dedicated link between the sender and the receiver then data link control layer is
sufficient, however if there is no dedicated link present then multiple stations can access the
channel simultaneously. Hence multiple access protocols are required to decrease collision
and avoid crosstalk.
For example, in a classroom full of students, when a teacher asks a question and all the
students (or stations) start answering simultaneously (send data at same time) then a lot of
chaos is created (data overlap or data lost) then it is the job of the teacher (multiple access
protocols) to manage the students and make them answer one at a time.
Thus, protocols are required for sharing data on non-dedicated channels. Multiple access
protocols can be subdivided further as

1. Random Access Protocol:


In this, all stations have same superiority that is no station has more priority than another
station. Any station can send data depending on medium’s state (idle or busy). It has two
features:
There is no fixed time for sending data
There is no fixed sequence of stations sending data
1. ALOHA
2. CSMA (Carrier Sense Multiple Access)
3. CSMA/CD (Carrier Sense Multiple Access with Collision Detection)
4. CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance)
ALOHA
ALOHA was developed at the University of Hawaii in the early 1970s by Norman Abramson. It was
used for ground-based radio broadcasting. In this method, stations share a common channel.
When two stations transmit simultaneously, collision occurs and frames are lost.
It was designed for wireless LAN but is also applicable for shared medium. In this, multiple
stations can transmit data at the same time and can hence lead to collision and data being
garbled.
PURE ALOHA
When a station sends data it waits for an acknowledgement. If the acknowledgement doesn’t
come within the allotted time, then the station waits for a random amount of time called back-
off time (Tb) and re-sends the data. Since different stations wait for different amount of time,
the probability of further collision decreases.
SLOTTED ALOHA:
It is similar to pure aloha, except that we divide time into slots and sending of data is allowed
only at the beginning of these slots. If a station misses out the allowed time, it must wait for
the next slot. This reduces the probability of collision.
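
The classical throughput formulas for the two ALOHA variants, S = G * e^(-2G) for pure ALOHA
and S = G * e^(-G) for slotted ALOHA (where G is the offered load in frames per frame time),
can be evaluated with a short sketch:

    import math

    def pure_aloha(G):
        return G * math.exp(-2 * G)      # peaks at 1/(2e), about 0.184, when G = 0.5

    def slotted_aloha(G):
        return G * math.exp(-G)          # peaks at 1/e, about 0.368, when G = 1.0

    for G in (0.5, 1.0):
        print(G, round(pure_aloha(G), 3), round(slotted_aloha(G), 3))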
CARRIER SENSE MULTIPLE ACCESS PROTOCOLS
Carrier Sense Multiple Access ensures fewer collisions as the station is required to first sense
the medium (for idle or busy) before transmitting data. If it is idle then it sends data,
otherwise it waits till the channel becomes idle.
However, there is still chance of collision in CSMA due to propagation delay. For example, if
station A wants to send data, it will first sense the medium. If it finds the channel idle, it will
start sending data.
However, by the time the first bit of data is transmitted (delayed due to propagation delay)
from station A, if station B requests to send data and senses the medium it will also find it
idle and will also send data. This will result in collision of data from station A and B.
CSMA access modes-
1-persistent: The node senses the channel, if idle it sends the data, otherwise it continuously
keeps on checking the medium for being idle and transmits unconditionally (with 1
probability) as soon as the channel gets idle.
Non-Persistent: The node senses the channel, if idle it sends the data, otherwise it checks the
medium after a random amount of time (not continuously) and transmits when found idle.
P-persistent: The node senses the medium, if idle it sends the data with p probability. If the
data is not transmitted ((1-p) probability) then it waits for some time and checks the medium
again, now if it is found idle then it sends with p probability. This repeat continues until the
frame is sent. It is used in Wi-Fi and packet radio systems.
O-persistent: Superiority of nodes is decided beforehand and transmission occurs in that
order. If the medium is idle, node waits for its time slot to send data.
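
A rough sketch of the p-persistent rule described above (the channel model, the probability value,
and the function names are illustrative assumptions, not a standard API):

    import random

    def p_persistent_send(channel_is_idle, p=0.25, max_slots=100):
        """Sense the channel each slot; when it is idle, transmit with probability p."""
        for slot in range(max_slots):
            if not channel_is_idle():
                continue                   # busy: keep sensing
            if random.random() < p:
                return slot                # number of slots waited before transmitting
            # with probability 1 - p, defer to the next slot and sense again
        return None

    print(p_persistent_send(lambda: True))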
COLLISION FREE PROTOCOLS
In computer networks, when more than one station tries to transmit simultaneously via a
shared channel, the transmitted data is garbled. This event is called collision. The Medium
Access Control (MAC) layer of the OSI model is responsible for handling collision of
frames.
Collision – free protocols are devised so that collisions do not occur. Protocols like
CSMA/CD and CSMA/CA nullify the possibility of collisions once the transmission
channel is acquired by a station.
However, collisions can still occur during the contention period if more than one station starts
to transmit at the same time. Collision – free protocols resolve contention in the contention
period, and so the possibility of collisions is eliminated.
Types of Collision – free Protocols
Bit – map Protocol
In bit map protocol, the contention period is divided into N slots, where N is the total number
of stations sharing the channel. If a station has a frame to send, it sets the corresponding bit in
the slot. So, before transmission, each station knows whether the other stations want to
transmit. Collisions are avoided by mutual agreement among the contending stations on who
gets the channel.
Binary Countdown
This protocol overcomes the overhead of 1 bit per station of the bit – map protocol. Here,
binary addresses of equal lengths are assigned to each station. For example, if there are 6
stations, they may be assigned the binary addresses 001, 010, 011, 100, 101 and 110. All
stations wanting to communicate broadcast their addresses. The station with higher address
gets the higher priority for transmitting.
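
A small sketch of binary countdown arbitration, using the six station addresses from the text
(the wired-OR channel is modelled here with max; this is an illustration only):

    def binary_countdown(addresses, width=3):
        contenders = list(addresses)
        for bit in range(width - 1, -1, -1):                     # broadcast MSB first
            channel = max((a >> bit) & 1 for a in contenders)    # wired-OR of the bits
            contenders = [a for a in contenders if (a >> bit) & 1 == channel]
        return contenders[0]

    stations = [0b001, 0b010, 0b011, 0b100, 0b101, 0b110]
    print(bin(binary_countdown(stations)))                       # -> 0b110, the highest address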
Limited Contention Protocols
These protocols combine the advantages of collision based protocols and collision free
protocols. Under light load, they behave like ALOHA scheme. Under heavy load, they
behave like bitmap protocols.

Adaptive Tree Walk Protocol


In adaptive tree walk protocol, the stations or nodes are arranged in the form of a binary tree
as follows -
Initially all nodes (A, B ……. G, H) are permitted to compete for the channel. If a node is
successful in acquiring the channel, it transmits its frame. In case of collision, the nodes are
divided into two groups (A, B, C, D in one group and E, F, G, H in another group). Nodes
belonging to only one of the groups are permitted to compete at a time. This process continues
until successful transmission occurs.

WIRELESS LANS

WLAN stands for Wireless Local Area Network. WLAN is a local area network that uses
radio communication to provide mobility to the network users while maintaining the
connectivity to the wired network. A WLAN basically, extends a wired local area network.
WLAN’s are built by attaching a device called the access point(AP) to the edge of the wired
network. Clients communicate with the AP using a wireless network adapter which is similar
in function to an Ethernet adapter. It is also called a LAWN is a Local area wireless network.
HISTORY
A professor at the University of Hawaii, Norman Abramson, developed the
world’s first wireless computer communication network. In 1979, Gfeller and U. Bapst
published a paper in the IEEE Proceedings reporting an experimental wireless local area
network using diffused infrared communications. The first of the IEEE workshops on
Wireless LAN was held in 1991.
Wireless LAN technology is based on the IEEE 802.11 standard. IEEE 802.3, commonly
referred to as Ethernet, is the most widely deployed member of the IEEE 802 family, and
IEEE 802.11 is commonly referred to as wireless Ethernet because of its close similarity with
IEEE 802.3. There are three media that can be used for transmission over wireless LANs:
infrared, radio frequency, and microwave.
Components of WLANs
The components of WLAN architecture as laid down in IEEE 802.11 are −
Stations (STA) − Stations comprise all devices and equipment that are connected to the
wireless LAN. Each station has a wireless network interface controller. A station can be of
two types −
Wireless Access Point (WAP or AP)
Client
Basic Service Set (BSS) − A basic service set is a group of stations communicating at the
physical layer level. BSS can be of two categories −
Infrastructure BSS
Independent BSS
Extended Service Set (ESS) − It is a set of all connected BSS.
Distribution System (DS) − It connects access points in ESS.
Types of WLANS
WLANs, as standardized by IEEE 802.11, operates in two basic modes, infrastructure, and ad
hoc mode.
Infrastructure Mode − Mobile devices or clients connect to an access point (AP) that in turn
connects via a bridge to the LAN or Internet. The client transmits frames to other clients via
the AP.
Ad Hoc Mode − Clients transmit frames directly to each other in a peer-to-peer fashion.
Advantages of WLANs
They provide clutter-free homes, offices and other networked places.
The LANs are scalable in nature, i.e. devices may be added or removed from the network at
greater ease than wired LANs.
The system is portable within the network coverage. Access to the network is not bounded by
the length of the cables.
Installation and setup are much easier than wired counterparts.
The equipment and setup costs are reduced.
Disadvantages of WLANs
Since radio waves are used for communications, the signals are noisier with more
interference from nearby systems.
Greater care is needed for encrypting information. Also, they are more prone to errors. So,
they require greater bandwidth than the wired LANs.
WLANs are slower than wired LANs.
DATA LINK LAYER SWITCHING
Data link layer is the second layer of the Open System Interconnections (OSI) model whose
function is to divide the stream of bits from physical layer into data frames and transmit the
frames according to switching requirements.
Switching in data link layer is done by network devices called bridges.
BRIDGES
A data link layer bridge connects multiple LANs (local area networks) together to form a
larger LAN. This process of aggregating networks is called network bridging. A bridge
connects the different components so that they appear as parts of a single network.
The following diagram shows connection by a bridge −
When a user accesses the internet or another computer network outside their immediate
location, messages are sent through the network of transmission media. This technique of
transferring the information from one computer network to another network is known
as switching.
Switching in a computer network is achieved by using switches. A switch is a small hardware
device which is used to join multiple computers together with one local area network (LAN).
Network switches operate at layer 2 (Data link layer) in the OSI model.
Switching is transparent to the user and does not require any configuration in the home
network.
Switches are used to forward the packets based on MAC addresses.
A Switch is used to transfer the data only to the device that has been addressed. It verifies the
destination address to route the packet appropriately.
It is operated in full duplex mode.
Packet collision is minimum as it directly communicates between source and destination.
It does not broadcast the message as it works with limited bandwidth.
Why is Switching Concept required
Switching concept is developed because of the following reasons:
Bandwidth: It is defined as the maximum transfer rate of a cable. It is a very critical and
expensive resource. Therefore, switching techniques are used for the effective utilization of
the bandwidth of a network.
Collision: Collision is the effect that occurs when more than one device transmits the
message over the same physical media, and they collide with each other. To overcome this
problem, switching technology is implemented so that packets do not collide with each other.
Advantages of Switching:
Switch increases the bandwidth of the network.
It reduces the workload on individual PCs as it sends the information to only that device
which has been addressed.
It increases the overall performance of the network by reducing the traffic on the network.
There will be less frame collision as switch creates the collision domain for each connection.
Disadvantages of Switching:
A Switch is more expensive than network bridges.
A Switch cannot determine the network connectivity issues easily.
Proper designing and configuration of the switch are required to handle multicast packets.
UNIT – III
NETWORK LAYER
 Design issues
ROUTING ALGORITHMS
 shortest path routing
 Flooding
 Hierarchical routing
 Broadcast
 Multicast
 distance vector routing
 Congestion Control Algorithms
 Quality of Service
 Internetworking
 The Network layer in the internet

INTRODUCTION
The network Layer is the third layer in the OSI model of computer networks. Its main
function is to transfer network packets from the source to the destination. It involves both
the source host and the destination host. Thus, the network layer is the lowest layer that deals
with end-to-end transmission.
To achieve its goals, the network layer must know about the topology of the network (i.e., the
set of all routers and links) and choose appropriate paths through it, even for large networks.
It must also take care when choosing routes to avoid overloading some of the communication
lines and routers while leaving others idle.
Features of Network Layer
If the packets are too large for delivery, they are fragmented i.e., broken down into smaller
packets.
It decides the route to be taken by the packets to travel from the source to the destination
among the multiple routes available in a network (also called routing).
The source and destination addresses are added to the data packets inside the network layer.
NETWORK LAYER DESIGN ISSUES

1. Store-and-forward packet switching


2. Services provided to transport layer
3. Implementation of connectionless service
4. Implementation of connection-oriented service
5. Comparison of virtual-circuit and datagram networks
1.Store-and-forward packet switching

A host with a packet to send transmits it to the nearest router, either on its own LAN or over a
point-to-point link to the ISP. The packet is stored there until it has fully arrived and the link
has finished its processing by verifying the checksum.
Then it is forwarded to the next router along the path until it reaches the destination host,
where it is delivered. This mechanism is store-and-forward packet switching.

2.Services provided to transport layer


The network layer provides services to the transport layer at the network layer/transport layer
interface. The services need to be carefully designed with the following goals in mind:
 Services independent of router technology.
 Transport layer shielded from number, type, topology of routers.
 Network addresses available to transport layer use uniform numbering plan–even
across LANs and WANs
3.Implementation of connectionless service

If connectionless service is offered, packets are injected into the network individually
and routed independently of each other. No advance setup is needed. In this context, the
packets are frequently called datagrams (in analogy with telegrams) and the network is
called a datagram network.
(Figure: routing tables for A (initially), A (later), C, and E)

Let us assume for this example that the message is four times longer than the maximum
packet size, so the network layer has to break it into four packets, 1, 2, 3, and 4, and send
each of them in turn to router A.
Every router has an internal table telling it where to send packets for each of the possible
destinations. Each table entry is a pair (destination and the outgoing line). Only directly
connected lines can be used.
A’s initial routing table is shown in the figure under the label ‘‘initially.’’
At A, packets 1, 2, and 3 are stored briefly, having arrived on the incoming link. Then each
packet is forwarded according to A’s table, onto the outgoing link to C within a new frame.
Packet 1 is then forwarded to E and then to F.
However, something different happens to packet 4. When it gets to A it is sent to router B,
even though it is also destined for F. For some reason (traffic jam along ACE path), A
decided to send packet 4 via a different route than that of the first three packets. Router A
updated its routing table, as shown under the label ‘‘later.’’
The algorithm that manages the tables and makes the routing decisions is called the routing
algorithm.
4.Implementation of connection-oriented service

(Figure: virtual-circuit tables for A, C, and E)

If connection-oriented service is used, a path from the source router all the way to the
destination router must be established before any data packets can be sent. This connection is
called a VC (virtual circuit), and the network is called a virtual-circuit network.

When a connection is established, a route from the source machine to the destination
machine is chosen as part of the connection setup and stored in tables inside the routers. That
route is used for all traffic flowing over the connection, exactly the same way that the
telephone system works.
When the connection is released, the virtual circuit is also terminated. With connection-
oriented service, each packet carries an identifier telling which virtual circuit it belongs to.
As an example, consider the situation shown in Figure. Here, host H1 has established
connection 1 with host H2. This connection is remembered as the first entry in each of the
routing tables.
The first line of A’s table says that if a packet bearing connection identifier 1 comes in from
H1, it is to be sent to router C and given connection identifier 1. Similarly the first entry at C
routes the packet to E, also with connection identifier 1.
Now let us consider what happens if H3 also wants to establish a connection to H2. It chooses
connection identifier 1 (because it is initiating the connection and this is its only connection)
and tells the network to establish the virtual circuit.
This leads to the second row in the tables. Note that we have a conflict here because although
A can easily distinguish connection 1 packets from H1 from connection 1 packets from H3, C
cannot do this.
For this reason, A assigns a different connection identifier to the outgoing traffic for the
second connection. Avoiding conflicts of this kind is why routers need the ability to replace
connection identifiers in outgoing packets.
In some contexts, this process is called label switching. An example of a connection-oriented
network service is MPLS (Multi-Protocol Label Switching).
5.Comparison of virtual-circuit and datagram networks

ROUTING ALGORITHMS
The main function of NL (Network Layer) is routing packets from the source machine to
the destination machine. The routing algorithm is that part of the network layer software
responsible for deciding which output line an incoming packet should be transmitted on.
If the network uses datagrams internally, this decision must be made anew for every arriving
data packet since the best route may have changed since last time.
If the network uses virtual circuits internally, routing decisions are made only when a new
virtual circuit is being set up. Thereafter, data packets just follow the already established
route.
The latter case is sometimes called session routing because a route remains in force for an
entire session (e.g., while logged in over a VPN).
There are two processes inside router:
 One of them handles each packet as it arrives, looking up the outgoing line to use
for it in the routing table. This process is forwarding.
 The other process is responsible for filling in and updating the routing tables. That is
where the routing algorithm comes into play. This process is routing.
Regardless of whether routes are chosen independently for each packet or only when new
connections are established, certain properties are desirable in a routing algorithm
correctness, simplicity, robustness, stability, fairness, optimality
Routing algorithms can be grouped into two major classes:
 nonadaptive (Static Routing)
 adaptive. (Dynamic Routing)
Nonadaptive algorithm do not base their routing decisions on measurements or estimates of
the current traffic and topology. Instead, the choice of the route to use to get from I to J is
computed in advance, off line, and downloaded to the routers when the network is booted.
This procedure is sometimes called static routing.
Adaptive algorithm in contrast, change their routing decisions to reflect changes in the
topology, and usually the traffic as well. Adaptive algorithms differ in
 Where they get their information (e.g., locally, from adjacent routers, or from all
routers),
 When they change the routes (e.g., every ∆T sec, when the load changes or
when the topology changes), and
 What metric is used for optimization (e.g., distance, number of hops, or estimated
transit time). This procedure is called dynamic routing
DIFFERENT ROUTING ALGORITHMS
1. Optimality principle
2. Shortest path algorithm
3. Flooding
4. Distance vector routing
5. Link state routing
6. Hierarchical Routing
THE OPTIMALITY PRINCIPLE
One can make a general statement about optimal routes without regard to network topology
or traffic. This statement is known as the optimality principle.
It states that if router J is on the optimal path from router I to router K, then the optimal path
from J to K also falls along the same
As a direct consequence of the optimality principle, we can see that the set of optimal routes
from all sources to a given destination form a tree rooted at the destination. Such a tree is
called a sink tree. The goal of all routing algorithms is to discover and use the sink trees for
all routers

(a) A network. (b) A sink tree for router B.

SHORTEST PATH ROUTING (DIJKSTRA’S)


The idea is to build a graph of the subnet, with each node of the graph representing a
router and each arc of the graph representing a communication line or link.
To choose a route between a given pair of routers, the algorithm just finds the shortest path
between them on the graph
1. Start with the local node (router) as the root of the tree. Assign a cost of 0 to this
node and make it the first permanent node.
2. Examine each neighbor of the node that was the last permanent node.
3. Assign a cumulative cost to each node and make it tentative
4. Among the list of tentative nodes
 Find the node with the smallest cost and make it Permanent
 If a node can be reached from more than one route, then select the route with the
shortest cumulative cost.
5. Repeat steps 2 to 4 until every node becomes permanent
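
The steps above are the standard Dijkstra shortest-path computation; the following Python sketch
runs it on a small invented router graph (node names and link costs are made up for illustration):

    import heapq

    def dijkstra(graph, source):
        """graph: {node: {neighbour: link_cost}}. Returns shortest costs from source."""
        dist = {source: 0}
        heap = [(0, source)]                       # (cumulative cost, node)
        while heap:
            cost, node = heapq.heappop(heap)
            if cost > dist.get(node, float("inf")):
                continue                           # stale entry: node already permanent
            for neighbour, weight in graph[node].items():
                new_cost = cost + weight
                if new_cost < dist.get(neighbour, float("inf")):
                    dist[neighbour] = new_cost     # better tentative cost found
                    heapq.heappush(heap, (new_cost, neighbour))
        return dist

    net = {
        "A": {"B": 2, "C": 5},
        "B": {"A": 2, "C": 1, "D": 4},
        "C": {"A": 5, "B": 1, "D": 2},
        "D": {"B": 4, "C": 2},
    }
    print(dijkstra(net, "A"))   # {'A': 0, 'B': 2, 'C': 3, 'D': 5}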
FLOODING
Flooding is a non-adaptive routing technique following this simple method: when a data
packet arrives at a router, it is sent to all the outgoing links except the one it has arrived on.
Flooding obviously generates vast numbers of duplicate packets, in fact, an infinite number
unless some measures are taken to damp the process.
A variation of flooding that is slightly more practical is selective flooding. In this algorithm
the routers do not send every incoming packet out on every line, only on those lines that are
going approximately in the right direction. Flooding is not practical in most applications.
For example, let us consider the network in the figure, having six routers that are connected
through transmission lines.

Using flooding technique −

An incoming packet to A, will be sent to B, C and D.


 B will send the packet to C and E.
 C will send the packet to B, D and F.
 D will send the packet to C and F.
 E will send the packet to F.
 F will send the packet to C and E.
TYPES OF FLOODING
Uncontrolled flooding − Here, each router unconditionally transmits the incoming data
packets to all its neighbours.
Controlled flooding − They use some methods to control the transmission of packets to the
neighbouring nodes. The two popular algorithms for controlled flooding are Sequence
Number Controlled Flooding (SNCF) and Reverse Path Forwarding (RPF).
Selective flooding − Here, the routers transmit the incoming packets only along those
paths which are heading approximately in the right direction, instead of along every
available path.
Advantages of Flooding
It is very simple to set up and implement, since a router may know only its neighbours.
It is extremely robust. Even in case of malfunctioning of a large number of routers, the packets
find a way to reach the destination.
The shortest path is always chosen by flooding.

HIERARCHICAL ROUTING
As networks grow in size, the router routing tables grow proportionally. Not only is router
memory consumed by ever-increasing tables, but more CPU time is needed to scan them and
more bandwidth is needed to send status reports about them.
At a certain point, the network may grow to the point where it is no longer feasible for every
router to have an entry for every other router, so the routing will have to be done
hierarchically, as it is in the telephone network.
When hierarchical routing is used, the routers are divided into what we will call regions. Each
router knows all the details about how to route packets to destinations within its own region
but knows nothing about the internal structure of other regions.
For huge networks, a two-level hierarchy may be insufficient; it may be necessary to group
the regions into clusters, the clusters into zones, the zones into groups, and so on, until we run
out of names for aggregations
When a single network becomes very large, an interesting question is ‘‘how many
levels should the hierarchy have?’’
For example, consider a network with 720 routers. If there is no hierarchy, each router
needs 720 routing table entries.
If the network is partitioned into 24 regions of 30 routers each, each router needs 30
local entries plus 23 remote entries for a total of 53 entries.
If a three-level hierarchy is chosen, with 8 clusters each containing 9 regions of 10 routers,
each router needs 10 entries for local routers, 8 entries for routing to other regions within its
own cluster, and 7 entries for distant clusters, for a total of 25 entries. Kamoun and Kleinrock
(1979) discovered that the optimal number of levels for an N-router network is ln N,
requiring a total of e ln N entries per router.
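
The table-size arithmetic in this example can be verified with a few lines (the numbers follow the
text above):

    import math

    routers = 720
    flat = routers                            # no hierarchy: one entry per router
    two_level = 30 + (24 - 1)                 # 30 local entries + 23 remote-region entries
    three_level = 10 + (9 - 1) + (8 - 1)      # local + other regions + other clusters

    print(flat, two_level, three_level)       # 720 53 25
    print(round(math.log(routers), 1))        # ln 720 ~ 6.6, the optimal number of levels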

BROADCAST ROUTING
Broadcast routing plays an important role in computer networking and telecommunications. It involves
transmitting data, messages, or signals from one source to all destinations within a network.
Unlike routing (one-to-one communication) or multicast routing (one-to-many

communication) broadcast routing ensures that information reaches all devices or nodes
within the network.
In broadcast routing, the network layer provides a service of delivering a packet sent from a
source node to all other nodes in the network; multicast routing enables a single source node
to send a copy of a packet to a subset of the other network nodes.
Mechanisms for Broadcast Routing
The mechanisms and protocols are employed to efficiently distribute data to multiple
recipients through broadcast routing. Here are some important methods:
Flooding: Flooding is an approach to broadcast routing. In this method, the sender
broadcasts the message to all connected devices, which then forward it to their connected
devices and so on. This continues until the message reaches all intended recipients or a
predefined maximum number of hops is reached. However, flooding can lead to network
congestion and inefficiency.
Spanning Tree Protocol (STP): STP is utilized in Ethernet networks to prevent loops and
ensure loop-free broadcast routing. It establishes a tree structure that connects all devices in the
network while avoiding redundant paths. Reducing network congestion and avoiding broadcast
storms are the benefits of implementing this approach.
The Internet Group Management Protocol (IGMP): It is a communication protocol
utilized in IP networks to facilitate the management of multicast group memberships. Its purpose is to
enable hosts to join or leave groups, ensuring that only interested recipients receive the
multicast traffic. This not only enhances network efficiency but also prevents unnecessary data
transmission.

MULTICAST ROUTING
Multicast routing is a networking method for efficient distribution of one-to-many traffic. A
multicast source, such as a live video conference, sends traffic in one stream to a multicast
group. The multicast group contains receivers such as computers, devices, and IP phones.
Multicasting is a type of one-to-many and many-to-many communication, as it allows a
sender or senders to send data packets to multiple receivers at once across LANs or WANs.
This process helps in minimizing the traffic load on the network because the data
can be received by multiple nodes at once.
Multicasting is considered a special case of broadcasting, as it works similarly to
broadcasting, but in multicasting the information is sent only to the targeted or specific members
of the network.

Applications: Multicasting is used in many areas like:


Internet protocol (IP)
Streaming Media
It also supports video conferencing applications and webcasts.

DISTANCE VECTOR ROUTING


The distance vector routing algorithm is one of the most commonly used routing algorithms.
It is a distributed algorithm, meaning that it is run on each router in the network. The
algorithm works by each router sending updates to its neighbours about the best path to each
destination.
A distance-vector routing (DVR) protocol requires that a router inform its neighbors of
topology changes periodically. Historically known as the old ARPANET routing algorithm
(or known as Bellman-Ford algorithm).
Bellman Ford Basics – Each router maintains a Distance Vector table containing the
distance between itself and ALL possible destination nodes. Distances based on a chosen
metric, are computed using information from the neighbors’ distance vectors.
Distance Vector Algorithm
A router transmits its distance vector to each of its neighbours in a routing packet.
Each router receives and saves the most recently received distance vector from each of its
neighbours.
A router recalculates its distance vector when:
 It receives a distance vector from a neighbour containing different information than before.
 It discovers that a link to a neighbour has gone down.
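
A minimal sketch of one Bellman-Ford update at a single router X, following the rule above (the
link costs and the neighbours' advertised vectors are invented for illustration):

    # For every destination, take the minimum over all neighbours of
    # (cost of the link to the neighbour) + (neighbour's advertised distance).

    link_cost = {"A": 2, "B": 7}              # X's direct links
    vectors = {                               # distance vectors received from neighbours
        "A": {"A": 0, "B": 3, "C": 8},
        "B": {"A": 3, "B": 0, "C": 1},
    }

    my_vector = {}
    for dest in ["A", "B", "C"]:
        my_vector[dest] = min(link_cost[n] + vectors[n][dest] for n in link_cost)

    print(my_vector)                          # {'A': 2, 'B': 5, 'C': 8}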

CONGESTION CONTROL ALGORITHMS


Too many packets present in (a part of) the network causes packet delay and loss that
degrades performance. This situation is called congestion.
The network and transport layers share the responsibility for handling congestion. Since
congestion occurs within the network, it is the network layer that directly experiences it and
must ultimately determine what to do with the excess packets.
However, the most effective way to control congestion is to reduce the load that the transport
layer is placing on the network. This requires the network and transport layers to work
together. In this chapter we will look at the network aspects of congestion.

When too much traffic is offered, congestion sets in and performance degrades sharply

Above Figure depicts the onset of congestion. When the number of packets hosts send into
the network is well within its carrying capacity, the number delivered is proportional to the
number sent. If twice as many are sent, twice as many are delivered.
However, as the offered load approaches the carrying capacity, bursts of traffic occasionally

fill up the buffers inside routers and some packets are lost. These lost packets consume some
of the capacity, so the number of delivered packets falls below the ideal curve. The network
is now congested. Unless the network is well designed, it may experience a congestion
collapse
The presence of congestion means that the load is (temporarily) greater than the resources (in
a part of the network) can handle. Two solutions come to mind: increase the resources or
decrease the load.
Congestion control algorithms
Congestion Control is a mechanism that controls the entry of data packets into the network,
enabling a better use of a shared network infrastructure and avoiding congestive collapse.
Congestive-Avoidance Algorithms (CAA) are implemented at the TCP layer as the
mechanism to avoid congestive collapse in a network.
QUALITY OF SERVICE
Quality-of-Service (QoS) refers to traffic control mechanisms that seek to either differentiate
performance based on application or network-operator requirements or provide predictable or
guaranteed performance to applications, sessions, or traffic aggregates. The basic phenomena
that QoS deals with are packet delay and packet losses of various kinds.
Need for QoS
Video and audio conferencing require bounded delay and loss rate.
Video and audio streaming requires a bounded packet loss rate; it may not be so sensitive to
delay.
Time-critical applications (real-time control) in which bounded delay is considered to be an
important factor.
Valuable applications should be provided better services than less valuable applications.
QoS Specification
QoS requirements can be specified as:
Delay
Delay Variation(Jitter)
Throughput
Error Rate
QOS PARAMETERS
Packet loss. This happens when network links become congested and routers and switches
start dropping packets. When packets are dropped during real-time communication, such as in
voice or video calls, these sessions can experience jitter and gaps in speech. Packets can be
dropped when a queue, or line of packets waiting to be sent, overflows.
Jitter. This is the result of network congestion, timing drift and route changes. Too much
jitter can degrade the quality of voice and video communication.
Latency. This is the time it takes a packet to travel from its source to its destination. Latency
should be as close to zero as possible. If a voice over IP call has a high amount of latency,
users can experience echo and overlapping audio.
Bandwidth. This is the capacity of a network communications link to transmit the maximum
amount of data from one point to another in a given amount of time. QoS optimizes the
network performance by managing bandwidth and giving high priority applications with
stricter performance requirements more resources than others.

INTERNETWORKING
Internetworking is a combination of two words, inter and networking, which implies an association
between different nodes or segments. This connection is established through
intermediary devices such as routers or gateways. The earlier term for an
internetwork was catenet.
This interconnection is often among or between public, private, commercial, industrial, or
governmental networks. Thus, an internetwork is a collection of
individual networks, connected by intermediate networking devices, that functions as one
large network.
Internetworking refers to the industry, products, and procedures that meet the challenge of
creating and administering internetworks.
To enable communication, every individual network node or segment is designed with a similar
protocol or communication logic, that is, Transmission Control Protocol (TCP) or Internet Protocol
(IP).
When a network communicates with another network having the same communication
procedures, it is called internetworking. Internetworking was designed to resolve the problem of
delivering a packet of information across many links.
There is a subtle difference between extending a network and internetworking. Merely
using either a switch or a hub to attach two local area networks is an extension of the LAN,
whereas connecting them via a router is an example of internetworking.
Internetworking is implemented in Layer 3 (Network Layer) of the OSI/ISO model. The
most notable example of internetworking is the Internet.
There are chiefly three types of internetworking:
Extranet
Intranet
Internet
Intranets and extranets may or may not have connections to the Internet. If there is a
connection to the Internet, the intranet or extranet is usually shielded from being accessed
from the Internet without authorization.
The Internet is not considered to be a part of the intranet or extranet, although it
may serve as a portal for access to portions of an extranet.
Extranet: It is a network of the internetwork that is restricted in scope to a single organization or
entity but that also has limited connections to the networks of one or more other, usually but
not necessarily trusted, organizations.
It is the lowest level of internetworking, often implemented in a private
area. An extranet may also be classified as a MAN, WAN, or other
form of network, but it cannot consist of a single local area network; it must have a
minimum of one connection to an external network.
Intranet: An intranet is a set of interconnected networks that exploits the Internet Protocol
and uses IP-based tools such as web browsers and FTP tools, and that is under the control of a
single administrative entity.
Internet: A specific internetwork, consisting of a worldwide interconnection of governmental,
academic, public, and private networks based upon the Advanced Research Projects Agency
Network (ARPANET) developed by ARPA of the U.S. Department of Defense. It is also
home to the World Wide Web (WWW) and is referred to as 'the Internet' to distinguish it from all
other generic internetworks.
THE NETWORK LAYER IN THE INTERNET
The "network layer" is the part of the Internet communications process where these
connections occur, by sending packets of data back and forth between different networks. In
the 7-layer OSI model the network layer is layer 3.
The main functions performed by the network layer are:
Routing: When a packet reaches a router's input link, the router moves it to the appropriate output link. For example, a packet traveling from source host S1 to destination host S2 must be forwarded to the next router on the path to S2.
Logical Addressing: The data link layer implements the physical addressing and network
layer implements the logical addressing. Logical addressing is also used to distinguish
between source and destination system. The network layer adds a header to the packet which
includes the logical addresses of both the sender and the receiver.
Internetworking: This is the main role of the network layer that it provides the logical
connection between different types of networks.
Fragmentation: Fragmentation is the process of breaking packets into smaller units so that they can travel across networks whose maximum packet size (MTU) is smaller than the original packet.
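As a rough sketch of the idea, the fragmentation below follows IPv4 conventions (a 20-byte header and offsets counted in 8-byte units); the MTU and payload sizes are assumed purely for illustration.

# Split a payload into fragments that fit a smaller MTU, IPv4-style:
# every fragment except the last carries a multiple of 8 data bytes,
# and offsets are expressed in 8-byte units.
def fragment(payload_len, mtu, header_len=20):
    max_data = (mtu - header_len) // 8 * 8       # data bytes per fragment
    fragments = []
    offset = 0
    while offset < payload_len:
        length = min(max_data, payload_len - offset)
        fragments.append((offset // 8, length))  # (offset field value, data length)
        offset += length
    return fragments

# Example: a 4000-byte payload crossing a link with a 1500-byte MTU.
print(fragment(4000, 1500))   # [(0, 1480), (185, 1480), (370, 1040)]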
UNIT – IV
TRANSPORT LAYER
 Transport Services
 Elements of Transport protocols
 Connection management
 TCP and UDP protocols
INTRODUCTION
The transport layer is Layer 4 of the Open Systems Interconnection (OSI) communications
model. It is responsible for ensuring that the data packets arrive accurately and reliably
between sender and receiver. The transport layer most often uses TCP or User Datagram
Protocol (UDP).
Together with the network layer the transport layer is the heart of the protocol hierarchy. The
network layer provides end-to-end packet delivery using datagrams or virtual circuits.
The transport layer builds on the network layer to provide data transport from a process on a
source machine to a process on a destination machine with a desired level of reliability that is
independent of the physical networks currently in use.
It provides the abstractions that applications need to use the network. Without the transport
layer, the whole concept of layered protocols would make little sense.
TRANSPORT SERVICES
The transport layer takes services from the Application layer and provides services to
the Network layer.
At the sender’s side: The transport layer receives data (a message) from the Application layer and then performs segmentation: it divides the message into segments, adds the source and destination port numbers to the header of each segment, and passes the segments to the Network layer.
At the receiver’s side: The transport layer receives data from the Network layer,
reassembles the segmented data, reads its header, identifies the port number, and forwards the
message to the appropriate port in the Application layer.
1. Services Provided to the Upper Layers
The ultimate goal of the transport layer is to provide efficient, reliable, and cost-effective data
transmission service to its users, normally processes in the application layer.
To achieve this, the transport layer makes use of the services provided by the network layer.
The software and/or hardware within the transport layer that does the work is called the
transport entity.
The network, transport, and application layers
The connection-oriented transport service: connections have three phases, namely establishment, data transfer, and release. Addressing and flow control are also part of the service.
The connectionless transport service: data is exchanged without connection establishment or release.
2. Transport Service Primitives
To see how these primitives might be used, consider an application with a server and a
number of remote clients. To start with, the server executes a LISTEN primitive, typically
by calling a library procedure that makes a system call that blocks the server until a client
turns up.
When a client wants to talk to the server, it executes a CONNECT primitive. The transport
entity carries out this primitive by blocking the caller and sending a packet to the server.
Encapsulated in the payload of this packet is a transport layer message for the server’s
transport entity.
3. Berkeley Sockets
Sockets were first released as part of the Berkeley UNIX 4.2BSD software distribution in
1983. They quickly became popular. The primitives are now widely used for Internet
programming on many operating systems, especially UNIX-based systems, and there is a
socket-style API for Windows called ‘‘Winsock.’’
A state diagram for a simple connection management scheme. Transitions labeled in italics are caused by packet arrivals. The solid lines show the client's state sequence; the dashed lines show the server's state sequence.
The socket primitives are:
SOCKET: Create a new communication endpoint
BIND: Associate a local address with a socket
LISTEN: Announce willingness to accept connections; give queue size
ACCEPT: Passively establish an incoming connection
CONNECT: Actively attempt to establish a connection
SEND: Send some data over the connection
RECEIVE: Receive some data from the connection
CLOSE: Release the connection
The first four primitives in the list are executed in that order by servers.
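That order can be seen in the following minimal sketch, which uses Python's standard socket API. The port number and message are arbitrary assumptions, and the two functions would normally run in separate processes or on separate machines.

import socket

# Server side: SOCKET, BIND, LISTEN, ACCEPT are executed in that order.
def run_server(port=6000):                                      # port chosen arbitrarily
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)     # SOCKET
    srv.bind(("", port))                                         # BIND
    srv.listen(5)                                                # LISTEN, queue size 5
    conn, addr = srv.accept()                                    # ACCEPT blocks until a client connects
    data = conn.recv(1024)                                       # RECEIVE
    conn.send(data.upper())                                      # SEND a reply
    conn.close()                                                 # CLOSE
    srv.close()

# Client side: SOCKET, then CONNECT, then SEND and RECEIVE.
def run_client(host="localhost", port=6000):
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)     # SOCKET
    cli.connect((host, port))                                    # CONNECT
    cli.send(b"hello")                                           # SEND
    print(cli.recv(1024))                                        # RECEIVE, prints b'HELLO'
    cli.close()                                                  # CLOSE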
Two examples are SCTP (Stream Control Transmission Protocol) defined in RFC 4960
and SST (Structured Stream Transport) (Ford, 2007). These protocols must change the
socket API slightly to get the benefits of groups of related streams, and they also support
features such as a mix of connection-oriented and connectionless traffic and even multiple
network paths.
ELEMENTS OF TRANSPORT PROTOCOLS
To establish a reliable service between two machines on a network, transport protocols are implemented; they resemble in some ways the data link protocols implemented at Layer 2. The major difference is that the data link layer operates over a single physical channel between two routers, whereas the transport layer operates over the entire network (subnet).
(a) Environment of the data link layer. (b) Environment of the transport layer.
Over point-to-point links such as wires or optical fiber, it is usually not necessary for a router to specify which router it wants to talk to: each outgoing line leads directly to a particular router. In the transport layer, explicit addressing of destinations is required.
The process of establishing a connection over the wire of Fig. (a) is simple: the other end is always there (unless it has crashed, in which case it is not there). Either way, there is not much to do.
Even on wireless links the process is not much different. Just sending a message is
sufficient to have it reach all other destinations. If the message is not acknowledged due
to an error, it can be resent. In the transport layer, initial connection establishment is
complicated, as we will see.
Elements of Transport Protocols are
1. Addressing
2. Connection Establishment
3. Connection Release
4. Flow Control and Buffering
5. Multiplexing
6. Crash Recovery
1) ADDRESSING
When an application (e.g., a user) process wishes to set up a connection to a remote
application process, it must specify which one to connect to. (Connectionless
transport has the same problem: to whom should each message be sent?)
The method normally used is to define transport addresses to which processes can listen for
connection requests. In the Internet, these endpoints are called ports.
We will use the generic term TSAP (Transport Service Access Point) to mean a specific
endpoint in the transport layer. The analogous endpoints in the network layer (i.e., network
layer addresses) are, not surprisingly, called NSAPs (Network Service Access Points). IP
addresses are examples of NSAPs.
TSAPs, NSAPs, and transport connections.
A possible scenario for a transport connection is as follows:
A mail server process attaches itself to TSAP 1522 on host 2 to wait for an incoming call. A
call such as our LISTEN might be used, for example.
An application process on host 1 wants to send an email message, so it attaches itself to
TSAP 1208 and issues a CONNECT request.
The request specifies TSAP 1208 on host 1 as the source and TSAP 1522 on host 2 as the
destination. This action ultimately results in a transport connection being established between
the application process and the server.
2. CONNECTION ESTABLISHMENT
Establishing a connection sounds easy, but it is actually surprisingly tricky. At first glance, it
would seem sufficient for one transport entity to just send a CONNECTION REQUEST
segment to the destination and wait for a CONNECTION ACCEPTED reply. The
problem occurs when the network can lose, delay, corrupt, and duplicate packets. To solve
this specific problem (DELAYED DUPLICATES) Tomlinson (1975) introduced the three-
way handshake.
This establishment protocol involves one peer checking with the other that the connection
request is indeed current. The normal setup procedure when host 1 initiates is shown in Fig.
(a).
Host 1 chooses a sequence number, x, and sends a CONNECTION REQUEST segment
containing it to host 2. Host 2 replies with an ACK segment acknowledging x and
announcing its own initial sequence number, y. Finally, host 1 acknowledges host 2’s choice
of an initial sequence number in the first data segment that it sends.
Three protocol scenarios for establishing a connection using a three-way handshake. CR denotes CONNECTION REQUEST.
 Normal operation.
 Old CONNECTION REQUEST appearing out of nowhere.
 Duplicate CONNECTION REQUEST and duplicate ACK.
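The exchange of initial sequence numbers described above can be sketched as a simplified simulation. This is not real TCP; segments are modelled as plain tuples and the sequence numbers are chosen at random.

import random

# Simplified three-way handshake: each side picks an initial sequence number
# and acknowledges the other side's choice. A segment is (type, seq, ack).
def three_way_handshake():
    x = random.randint(0, 2**32 - 1)        # host 1 chooses initial sequence number x
    cr = ("CR", x, None)                    # CONNECTION REQUEST carrying x

    y = random.randint(0, 2**32 - 1)        # host 2 chooses its own initial number y
    ack = ("ACK", y, x)                     # ACK acknowledges x and announces y

    data = ("DATA", x, y)                   # host 1's first data segment acknowledges y
    return cr, ack, data

for segment in three_way_handshake():
    print(segment)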
3. CONNECTION RELEASE
There are two styles of terminating a connection: asymmetric release and symmetric release
Asymmetric release is the way the telephone system works: when one party hangs up, the
connection is broken. Symmetric release treats the connection as two separate unidirectional
connections and requires each one to be released separately
Asymmetric release is abrupt and may result in data loss. Consider the scenario of Fig.
After the connection is established, host 1 sends a segment that arrives properly at host 2.
Then host 1 sends another segment. Unfortunately, host 2 issues a DISCONNECT before the
second segment arrives. The result is that the connection is released and data are lost.
Clearly, a more sophisticated release protocol is needed to avoid data loss. One way is to use
symmetric release, in which each direction is released independently of the other one. Here, a
host can continue to receive data even after it has sent a DISCONNECT segment.
Symmetric release does the job when each process has a fixed amount of data to send and
clearly knows when it has sent it. One can envision a protocol in which host 1 says, "I am done. Are you done too?" If host 2 responds, "I am done too. Goodbye," the connection can be safely released.
Four protocol scenarios for releasing a connection. (a) Normal case of three-way
handshake. (b) Final ACK lost. (c) Response lost. (d) Response lost and subsequent DRs
lost.
4. ERROR CONTROL AND FLOW CONTROL
Error control is ensuring that the data is delivered with the desired level of reliability, usually

that all of the data is delivered without any errors. Flow control is keeping a fast transmitter
from overrunning a slow receiver.
1. A frame carries an error-detecting code (e.g., a CRC or checksum) that is used to
check if the information was correctly received.
2. A frame carries a sequence number to identify itself and is retransmitted by the sender
until it receives an acknowledgement of successful receipt from the receiver. This is
called ARQ (Automatic Repeat request).
3. There is a maximum number of frames that the sender will allow to be outstanding at any time, pausing if the receiver is not acknowledging frames quickly enough. If this maximum is one packet, the protocol is called stop-and-wait (a stop-and-wait sketch appears after this list). Larger windows enable pipelining and improve performance on long, fast links.
4. The sliding window protocol combines these features and is also used to support
bidirectional data transfer.
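Here is a minimal sketch of the stop-and-wait case over a deliberately lossy channel. The loss rate and frame contents are assumptions, and in a real protocol the receiver would use the sequence number to discard duplicates created by lost acknowledgements.

import random

# Stop-and-wait ARQ: the sender keeps retransmitting frame n until it sees an
# acknowledgement for n, then moves on to frame n + 1.
def lossy_send(item, loss_rate=0.3):
    """Stand-in for an unreliable channel that loses items at random."""
    return None if random.random() < loss_rate else item

def stop_and_wait(frames):
    delivered = []                                   # what the receiver passes upward
    for seq, payload in enumerate(frames):
        while True:
            received = lossy_send((seq, payload))    # transmit the frame; it may be lost
            if received is None:
                continue                             # timeout expires: retransmit
            ack = lossy_send(seq)                    # the receiver's ACK may also be lost
            if ack == seq:
                delivered.append(payload)            # ACK arrived: send the next frame
                break
    return delivered

print(stop_and_wait(["a", "b", "c"]))   # ['a', 'b', 'c'] despite random losses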
5. MULTIPLEXING
Multiplexing, or sharing several conversations over connections, virtual circuits, and physical
links plays a role in several layers of the network architecture. In the transport layer, the need
for multiplexing can arise in a number of ways. Multiplexing: connections share a network
address
Inverse multiplexing: addresses share a connection
(a) Multiplexing. (b) Inverse multiplexing.
6. CRASH RECOVERY
In network communication crash recovery plays an important role in the transport layer
in ensuring the reliable exchange of data between endpoints. To achieve this reliability,
various mechanisms come into play, such as retransmission, timeout-based strategies,
selective repeat, and flow control.
CONNECTION MANAGEMENT
TCP/IP connections are requested by the client connection manager and accepted by the
server connection manager. The integration server process contains the connection manager,
which makes the connections.
Only one integration server can have TCP/IP server nodes using a specific port at any one
time.
The CONNECT primitive transmits a TCP segment with the SYN bit on and the ACK bit off
and waits for a response.
When the segment sent by Host 1 reaches the destination, Host 2, the receiving transport entity checks to see whether there is a process that has done a LISTEN on the port given in the destination port field. If not, it sends a reply with the RST bit on to refuse the connection. Otherwise, it hands the TCP segment to the listening process, which can accept or refuse the connection (for example, if it does not like the look of the client).
TCP AND UDP PROTOCOLS
Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are both transport layer protocols. TCP is a connection-oriented protocol, whereas UDP is a connectionless and unreliable protocol that is part of the Internet protocol suite, sometimes referred to as the UDP/IP suite.
TCP (Transmission Control Protocol) :
TCP is a Layer 4 protocol that acknowledges received packets and is reliable because it resends lost packets. These features make it more dependable than UDP, but they also add overhead. It is used by application protocols such as HTTP and FTP.
Transmission Control Protocol
TCP (Transmission Control Protocol) is one of the main protocols of the Internet protocol suite. It lies between the application layer and the network layer and provides a reliable delivery service.
It is a connection-oriented protocol for communications that helps in the exchange of
messages between different devices over a network. The Internet Protocol (IP), which
establishes the technique for sending data packets between computers, works with TCP.
Features of TCP
TCP keeps track of the segments being transmitted or received by assigning numbers to every
single one of them.
Flow control limits the rate at which a sender transfers data. This is done to ensure reliable
delivery.
TCP implements an error control mechanism for reliable data transfer.
TCP takes into account the level of congestion in the network.
Advantages of TCP
It is reliable for maintaining a connection between Sender and Receiver.
It is responsible for sending data in a particular sequence.
Its operations are not dependent on OS.
It allows and supports many routing protocols.
It can reduce the speed of data based on the speed of the receiver.
Disadvantages of TCP
It is slower than UDP and it takes more bandwidth.
Slower upon starting of transfer of a file.
Not suitable for LAN and PAN Networks.
It does not have a multicast or broadcast category.
A page cannot be displayed completely until all of its data has arrived.
Applications of TCP
 Sending Emails
 Transferring Files
 Web Browsing
UDP (User Datagram Protocol) :
UDP is also a Layer 4 protocol, but unlike TCP it does not acknowledge the packets it sends. It is therefore not reliable and depends on higher-layer protocols for reliability. On the other hand, it is simple, scalable, and has less overhead than TCP. It is used for video and voice streaming.
User Datagram Protocol
User Datagram Protocol (UDP) is a transport layer protocol and part of the Internet protocol suite, sometimes referred to as the UDP/IP suite. Unlike TCP, it is an unreliable and connectionless protocol, so there is no need to establish a connection before data transfer. UDP helps establish low-latency and loss-tolerating communication over the network and enables process-to-process communication.
Features of UDP
It is used for simple request-response communication when the amount of data is small and there is less concern about flow and error control.
It is a suitable protocol for multicasting. UDP is used by some routing update protocols such as RIP (Routing Information Protocol).
Advantages of UDP
It does not require any connection for sending or receiving data.
Broadcast and Multicast are available in UDP.
UDP can operate on a large range of networks.
UDP has live and real-time data.
UDP can deliver data even if some parts of the data are missing.
Disadvantages of UDP
There is no way to acknowledge the successful transfer of data.
UDP has no mechanism to track the sequence of data.
UDP is connectionless, which makes it unreliable for data transfer.
In the case of congestion or collisions, routers drop UDP packets, whereas TCP retransmits lost segments.
UDP may drop packets when errors are detected.
Applications of UDP
 Gaming
 Video Streaming
 Online Video Chats
Which Protocol is Better: TCP or UDP?
The answer to this question is difficult because it depends entirely on the task being performed and the type of data being delivered.
UDP is better for online gaming because it allows low-lag, real-time interaction.
TCP is better when transferring data such as photos, videos, and files, because it ensures that the data arrives correctly and completely.
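The practical difference can be seen by contrasting the TCP socket sketch given earlier with a UDP exchange, which needs no connection setup at all. This is a minimal sketch; the port number is an arbitrary assumption and the two functions would run in separate processes.

import socket

# UDP receiver: no LISTEN/ACCEPT phase; each recvfrom() returns one whole datagram.
def udp_receiver(port=6001):                                    # port chosen arbitrarily
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    data, addr = sock.recvfrom(1024)
    sock.sendto(data.upper(), addr)      # reply goes straight back; no connection state kept
    sock.close()

# UDP sender: a datagram can be sent immediately, but nothing guarantees delivery.
def udp_sender(host="localhost", port=6001):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"hello", (host, port))
    print(sock.recvfrom(1024)[0])        # may block forever if either datagram was lost
    sock.close()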
UNIT – V
APPLICATION LAYER
 Domain Name System
 SNMP
 Electronic Mail
 The World WEB
 HTTP
 Streaming audio and video
INTRODUCTION
The application layer sits at Layer 7, the top of the Open Systems Interconnection (OSI)
communications model. It ensures an application can effectively communicate with other
applications on different computer systems and networks.
This layer provides several ways of manipulating data (information), which enables users to access the network with ease.
It also makes requests to the layer below it, the presentation layer, to receive various types of information.
The application layer interface interacts directly with applications and provides common application services. It is the highest layer of the open system and provides services directly to application processes.
Functions of Application Layer :
Data from User <=> Application layer <=> Data from Presentation Layer
The application layer provides facilities by which users can send and forward email, and it also provides storage facilities.
This layer allows users to access, retrieve, and manage files on a remote computer.
In this layer, data is presented in a form users can understand directly, rather than having to remember or visualize it in binary format (0s and 1s).
This application layer basically interacts with Operating System (OS) and thus further
preserves the data in a suitable manner.
This layer also receives and preserves data from its previous layer, which is Presentation
Layer (which carries in itself the syntax and semantics of the information transmitted).
The protocols which are used in this application layer depend upon what information users
wish to send or receive.
This application layer, in general, performs host initialization followed by remote login to
hosts.
APPLICATION LAYER PROTOCOLS:
The application layer provides several protocols which allow any software to easily send and
receive information and present meaningful data to its users.
The following are some of the protocols which are provided by the application layer.
TELNET: Telnet stands for Telecommunications Network. This protocol provides remote terminal access over a network: it allows a Telnet client to log in to a Telnet server and use its resources. Telnet uses port number 23.
DNS: DNS stands for Domain Name System. The DNS service translates the domain name
(selected by user) into the corresponding IP address. For example- If you choose the domain
name as www.abcd.com, then DNS must translate it as 192.36.20.8 (random IP address
written just for understanding purposes). DNS protocol uses the port number 53.
DHCP: DHCP stands for Dynamic Host Configuration Protocol. It provides IP addresses to
hosts. Whenever a host requests an IP address from the DHCP server, the server supplies the host with its configuration information. DHCP uses port numbers 67 and 68.
FTP: FTP stands for File Transfer Protocol. This protocol helps to transfer different files
from one device to another. FTP promotes sharing of files via remote computer devices with
reliable, efficient data transfer. FTP uses port number 20 for data transfer and port number 21 for control.
SMTP: SMTP stands for Simple Mail Transfer Protocol. It is used to transfer electronic mail
from one user to another user. SMTP is used by end users to send emails with ease. SMTP
uses port numbers 25 and 587.
HTTP: HTTP stands for Hyper Text Transfer Protocol. It is the foundation of the World
Wide Web (WWW). HTTP works on the client server model. This protocol is used for
transmitting hypermedia documents like HTML. This protocol was designed particularly for
the communications between the web browsers and web servers, but this protocol can also be
used for several other purposes. HTTP is a stateless protocol (the client sends a request and the server sends back a response, with each request handled independently), which means the server does not keep track of a client's previous requests. HTTP uses port number 80.
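A minimal sketch of one such stateless request-response exchange over port 80, using Python's standard library; the host name is only an example.

import http.client

# One HTTP request and its response; the server keeps no memory of this client
# once the response has been sent.
conn = http.client.HTTPConnection("example.com", 80)   # example host, port 80
conn.request("GET", "/")                               # request line and headers
response = conn.getresponse()
print(response.status, response.reason)                # e.g. 200 OK
body = response.read()                                 # the returned document
conn.close()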
NFS: NFS stands for Network File System. This protocol allows remote hosts to mount files
over a network and interact with those file systems as though they are mounted locally. NFS
uses the port number 2049.
SNMP: SNMP stands for Simple Network Management Protocol. This protocol gathers data by polling devices on the network from the management station at fixed or random intervals, requiring them to disclose certain information. SNMP uses UDP port 161 (for requests to agents) and UDP port 162 (for traps sent to the manager).
DOMAIN NAME SYSTEM (DNS)
To identify an entity, TCP/IP protocols use the IP address, which uniquely identifies the
connection of a host to the Internet. However, people prefer to use names instead of numeric
addresses. Therefore, we need a system that can map a name to an address or an address to a
name.
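Such a mapping can be tried directly with the standard library, as in the minimal sketch below; the domain name is only an example and the address returned depends on the resolver that answers the query.

import socket

# Forward lookup: map a human-readable name to an IP address.
print(socket.gethostbyname("www.example.com"))         # e.g. '93.184.216.34'

# Reverse lookup: map an address back to a name (the idea behind the inverse domain).
# print(socket.gethostbyaddr("93.184.216.34"))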
NAME SPACE
A name space that maps each address to a unique name can be organized in two ways: flat or hierarchical.
Flat Name Space
In a flat name space, a name is assigned to an address. A name in this space is a
sequence of characters without structure.
Hierarchical Name Space
In a hierarchical name space, each name is made of several parts. The first part can define the
nature of the organization, the second part can define the name of an organization, the third
part can define departments in the organization, and so on.
Example: challenger.jhda.edu, challenger.berkeley.edu, and challenger.smart.com
DOMAIN NAME SPACE
To have a hierarchical name space, a domain name space was designed. In this design the
names are defined in an inverted-tree structure with the root at the top. The tree can have only
128 levels: level 0 (root) to level 127.
Domain name space
LABEL
Each node in the tree has a label, which is a string with a maximum of 63 characters. The root
label is a null string (empty string). DNS requires that children of a node (nodes that branch
from the same node) have different labels, which guarantees the uniqueness of the domain
names.
DOMAIN NAME
Each node in the tree has a domain name. A full domain name is a sequence of labels
separated by dots (.). The domain names are always read from the node up to the root.
The last label is the label of the root (null).
This means that a full domain name always ends in a null label, which means the
last character is a dot because the null string is nothing. Below Figure shows some
domain names
Domain names and labels
DOMAIN
A domain is a subtree of the domain name space. The name of the domain is the domain
name of the node at the top of the subtree.
DISTRIBUTION OF NAME SPACE:
The information contained in the domain name space must be stored. However, it is very
inefficient and also unreliable to have just one computer store such a huge amount of
information. In this section, we discuss the distribution of the domain name space
1. Hierarchy of Name Servers
One way is to distribute the information among many computers called DNS servers. We let the root stand alone and create as many domains (subtrees) as there are first-level nodes.
2. Zone
Since the complete domain name hierarchy cannot be stored on a single server, it is divided among many servers. What a server is responsible for, or has authority over, is called a zone. We can define a zone as a contiguous part of the entire tree.
3. Root Server
A root server is a server whose zone consists of the whole tree. A root server usually does not
store any information about domains but delegates its authority to other servers, keeping
references to those servers. There are several root servers, each covering the whole domain
name space. The servers are distributed all around the world.
4. Primary and Secondary Servers
A primary server is a server that stores a file about the zone for which it is an authority. It is
responsible for creating, maintaining, and updating the zone file. It stores the zone file on a
local disk
A secondary server is a server that transfers the complete information about a zone from
another server (primary or secondary) and stores the file on its local disk. The secondary
server neither creates nor updates the zone files.
DNS IN THE INTERNET
DNS is a protocol that can be used on different platforms. In the Internet, the domain name space (tree) is divided into three different sections: generic domains, country domains, and the inverse domain.
1) Generic Domains
The generic domains define registered hosts according to their generic behavior. Each node
in the tree defines a domain, which is an index to the domain name space database
2) Country Domains
The country domains section uses two-character country abbreviations (e.g., us for United
States). Second labels can be organizational, or they can be more specific, national
designations. The United States, for example, uses state abbreviations as a subdivision of us
(e.g., ca.us.).
3) Inverse Domain
The inverse domain is used to map an address to a name.
SNMP
Simple Network Management Protocol (SNMP) is an application-layer protocol that
transmits management data between network devices. SNMP belongs to the Transmission
Control Protocol/Internet Protocol (TCP/IP) family.
If an organization has 1000 devices then to check all devices, one by one every day, are
working properly or not is a hectic task. To ease these up, a Simple Network Management
Protocol (SNMP) is used.
SNMP is an application layer protocol that uses UDP port numbers 161 and 162. SNMP is used to monitor the network, detect network faults, and sometimes even to configure remote devices.
Components of SNMP
There are mainly three components of SNMP:
SNMP Manager
It is a centralized system used to monitor the network. It is also known as a Network
Management Station (NMS). A router that runs the SNMP server program is called an agent,
while a host that runs the SNMP client program is called a manager.
SNMP agent
It is a management software module installed on a managed device. The manager
accesses the values stored in the database, whereas the agent maintains the information in the
database. To ascertain if the router is congested or not, for instance, a manager can examine
the relevant variables that a router stores, such as the quantity of packets received and
transmitted.
Management Information Base
MIB consists of information on resources that are to be managed. This information is
organized hierarchically. It consists of object instances, which are essentially variables.