3.2 Physical and Data Link Layer Services
Key Concepts:
The Physical Layer and Data Link Layer are foundational to ensuring reliable communication across a
network.
Physical Layer Services:
The Physical Layer is responsible for the transmission of raw bits over a transmission medium. It
converts digital data into electrical signals (or light signals, depending on the medium).
It provides an electrical, mechanical, and functional interface to the transmission medium. The
physical layer does not understand data semantics or structure; it only transmits the raw bit stream.
Key Services Provided by the Physical Layer:
Transmission of Raw Bits: Transmits bits in the form of signals over various types of transmission
mediums like coaxial cables, fiber optics, etc.
Signal Encoding and Modulation: Transforms data into a form suitable for the transmission
medium (e.g., electrical pulses or optical signals); a short encoding sketch follows this list.
Bit Synchronization: Ensures the receiver can synchronize with the sender to understand the
timing of the incoming bits.
Error Detection (at the signal level): The physical layer can detect certain errors caused by noise or
signal degradation in the transmission medium (e.g., lost bits or invalid signal levels).
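As a rough illustration of signal encoding, the Python sketch below shows Manchester encoding, a
common line code in which every bit is sent as a mid-bit transition. The function name and the
convention used (1 = low-to-high, 0 = high-to-low, as in IEEE 802.3) are illustrative assumptions,
not part of the notes above.

# Minimal sketch of Manchester encoding (IEEE 802.3 convention assumed):
# each bit becomes two half-bit signal levels with a transition in the middle.
def manchester_encode(bits):
    """Map a list of bits to signal levels (-1 = low, +1 = high)."""
    signal = []
    for b in bits:
        if b == 1:
            signal += [-1, +1]  # 1: low-to-high transition
        else:
            signal += [+1, -1]  # 0: high-to-low transition
    return signal

print(manchester_encode([1, 0, 1, 1]))  # [-1, 1, 1, -1, -1, 1, -1, 1]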
Data Link Layer Services:
The Data Link Layer provides a reliable data transfer mechanism over the unreliable Physical Layer. It
is responsible for error detection and correction, flow control, and encapsulation of packets into frames.
Key Services Provided by the Data Link Layer:
Framing: The Data Link Layer breaks the data into manageable frames. Each frame contains the
payload (data) plus control information, such as addresses and error-detection codes (a
byte-stuffing sketch follows this list).
Flow Control: It prevents the fast sender from overwhelming the slower receiver by controlling the
rate at which data is sent.
Error Detection and Correction: It uses techniques like parity bits, checksums, and Cyclic
Redundancy Check (CRC) to detect errors that occur during transmission, and relies on
retransmission or error-correcting codes to recover from them.
Physical Addressing (MAC): The Data Link Layer assigns a MAC address (hardware address) to
devices for identification on the local network.
Medium Access Control (MAC): The Data Link Layer defines how devices share access to the
transmission medium, using protocols like CSMA/CD, Polling, and Token Passing.
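As a minimal illustration of the framing service mentioned above, the Python sketch below builds a
frame using byte stuffing; the FLAG and ESC values and the function name are illustrative choices,
not taken from a specific protocol.

# Byte-stuffing framing sketch; the FLAG and ESC byte values are illustrative.
FLAG = 0x7E  # marks the start and end of a frame
ESC = 0x7D   # escapes any FLAG or ESC byte that appears in the payload

def frame(payload):
    body = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            body.append(ESC)  # insert an escape byte before the special value
        body.append(b)
    return bytes([FLAG]) + bytes(body) + bytes([FLAG])

print(frame(b"\x01\x7e\x02").hex())  # '7e017d7e027e'

The same idea in reverse (scanning for FLAG bytes and removing ESC bytes) lets the receiver locate
frame boundaries in the incoming byte stream.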
3.3 Error Detection and Correction
Error Detection:
Errors during data transmission are inevitable due to factors such as signal interference, noise,
and channel impairments. The objective of error detection is to determine whether the data has
been corrupted in transit.
Error Detection Methods:
1. Parity Bit:
A parity bit is a binary digit added to a data stream to ensure the number of 1s in the stream is
either even or odd.
Even Parity: If the number of 1s in the data is even, a parity bit of 0 is added. If it is odd, a parity
bit of 1 is added.
Odd Parity: Similarly, if the number of 1s is odd, a 0 is added; if even, a 1 is added.
Limitations: Parity can only detect an odd number of bit errors (such as a single flipped bit). It
cannot detect an even number of errors, and it cannot pinpoint which bit is corrupted.
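A minimal sketch of even parity in Python (the function name is illustrative):

# Even parity sketch: the parity bit makes the total number of 1s even.
def even_parity_bit(bits):
    return sum(bits) % 2  # 0 if the count of 1s is already even, else 1

data = [1, 0, 1, 1, 0, 1]                      # four 1s -> parity bit is 0
codeword = data + [even_parity_bit(data)]
assert sum(codeword) % 2 == 0                  # receiver check: total count of 1s is even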
2. Cyclic Redundancy Check (CRC):
CRC is a powerful method for detecting errors, especially burst errors (multiple consecutive
corrupted bits).
The sender appends zero bits to the data and divides it by a predefined generator polynomial
using modulo-2 division; the remainder is appended to the data as the CRC. The receiver divides
the received frame (data plus CRC) by the same polynomial: a non-zero remainder means an error
has occurred.
CRC is widely used in networking technologies like Ethernet and Wi-Fi.
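The sketch below performs the modulo-2 division described above, both for generation and for
checking; the 4-bit generator polynomial x^4 + x + 1 (binary 10011) is chosen only for illustration
(real standards such as Ethernet use a 32-bit generator).

# CRC sketch using modulo-2 (XOR) division on integers.
# The generator x^4 + x + 1 (binary 10011) is an illustrative choice.
GEN = 0b10011
GEN_BITS = GEN.bit_length() - 1  # degree of the generator = number of CRC bits

def crc_remainder(value, nbits):
    reg = value << GEN_BITS                       # append GEN_BITS zero bits
    for i in range(nbits + GEN_BITS - 1, GEN_BITS - 1, -1):
        if reg & (1 << i):                        # leading bit set -> XOR in the generator
            reg ^= GEN << (i - GEN_BITS)
    return reg                                    # the remainder is the CRC

data = 0b1101011011                               # 10 data bits
crc = crc_remainder(data, 10)                     # 0b1110 for this example
frame = (data << GEN_BITS) | crc                  # transmitted frame = data followed by CRC
assert crc_remainder(frame, 10 + GEN_BITS) == 0   # receiver: zero remainder -> no error detected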
3. Checksum:
A checksum is computed by summing the data units (e.g., bytes or 16-bit words) of the message.
The sender transmits the checksum along with the data, and the receiver recomputes it over what
it received.
If the two checksums match, the data is assumed to be correct. If they don't, an error has occurred.
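A minimal sketch of a 16-bit ones'-complement checksum (the style used by the Internet protocols)
is shown below; it assumes, for simplicity, that the message has an even number of bytes.

# 16-bit ones'-complement checksum sketch (even-length message assumed).
def checksum16(data):
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]        # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)     # fold any carry back in
    return ~total & 0xFFFF                           # ones' complement of the sum

msg = b"\x45\x00\x00\x3c\x1c\x46"
cks = checksum16(msg)
# Receiver: the checksum computed over data plus the transmitted checksum comes out to zero.
assert checksum16(msg + cks.to_bytes(2, "big")) == 0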
Error Correction:
Once an error is detected, correction mechanisms are used to fix it, ensuring data integrity.
1. Automatic Repeat Request (ARQ):
ARQ is an error control method where data is retransmitted when errors are detected. The
receiver sends an ACK (acknowledgment) for successfully received data, or a NAK (negative
acknowledgment) when errors are detected.
Types of ARQ:
Stop-and-Wait ARQ: The sender waits for an acknowledgment after sending a frame before
sending the next one. Simple but inefficient for high-latency or high-speed links.
Go-Back-N ARQ: The sender sends multiple frames without waiting for acknowledgment, but
if an error is detected, it must retransmit the erroneous frame and all frames sent after it.
Selective Repeat ARQ: Only the erroneous frames are retransmitted, rather than all frames.
This reduces retransmission and improves throughput.
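As a rough illustration of the basic ARQ loop, the sketch below simulates Stop-and-Wait ARQ, the
simplest of the three: the sender keeps retransmitting a frame until the matching ACK arrives, then
flips its 1-bit sequence number. The channel function and loss probability are simulated
placeholders, not a real link API.

import random

# Stop-and-Wait ARQ sketch over a simulated lossy channel.
# channel_deliver() is a stand-in for a real link, not an actual API.
def channel_deliver(item, loss_prob=0.3):
    """Return the item, or None if the simulated channel lost it."""
    return None if random.random() < loss_prob else item

def stop_and_wait_send(frames):
    seq = 0                                           # 1-bit sequence number
    for payload in frames:
        while True:
            delivered = channel_deliver((seq, payload))   # transmit the frame
            if delivered is not None:
                ack = channel_deliver(seq)                # receiver sends an ACK carrying seq
                if ack == seq:
                    break                                 # ACK arrived: move to the next frame
            # otherwise: (simulated) timeout -> retransmit the same frame
        seq ^= 1                                          # alternate the sequence number

stop_and_wait_send(["frame-A", "frame-B", "frame-C"])
print("all frames delivered and acknowledged")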
2. Error-Correcting Codes:
Hamming Code: A block code that adds extra parity bits to data to allow the detection and
correction of errors. It can correct single-bit errors and detect double-bit errors (a sketch follows
this list).
Convolutional Codes: Operate on the data as a continuous stream rather than in fixed blocks,
spreading the information of each input bit across several output bits so the decoder can recover
from errors.
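Returning to the Hamming Code, the sketch below encodes 4 data bits into a Hamming(7,4)
codeword and shows how the receiver's parity checks (the syndrome) locate and correct a single
flipped bit. The function names are illustrative; the layout follows the usual convention of parity
bits at positions 1, 2, and 4.

# Hamming(7,4) sketch: codeword positions 1..7, parity bits at positions 1, 2, 4.
def hamming74_encode(d):                  # d = [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                     # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4                     # parity over positions 3, 6, 7
    p4 = d2 ^ d3 ^ d4                     # parity over positions 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(c):
    """Return the codeword with a single-bit error (if any) corrected."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # recheck positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]        # recheck positions 2, 3, 6, 7
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]        # recheck positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s4       # position of the flipped bit (0 = no error)
    if syndrome:
        c[syndrome - 1] ^= 1              # flip the erroneous bit back
    return c

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                              # inject a single-bit error at position 5
assert hamming74_correct(code) == hamming74_encode([1, 0, 1, 1])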
3.4 Flow and Error Control
Flow Control:
Flow control is necessary to ensure that the sender doesn't overwhelm the receiver, particularly when
the receiver cannot process the data as quickly as it is being sent.
Flow Control Mechanisms:
1. Stop-and-Wait:
The sender sends one frame and waits for the receiver's acknowledgment before sending the
next. Simple but introduces idle time during the waiting period.
2. Sliding Window:
This technique allows the sender to send multiple frames without waiting for acknowledgment.
The sender and receiver maintain windows that define the number of frames they can
send/receive.
Go-Back-N: Allows sending multiple frames, but an error forces retransmission of the erroneous
frame and every frame sent after it.
Selective Repeat: Allows retransmission of only the erroneous frame, improving throughput
compared to Go-Back-N.
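A rough sketch of the sender side of a Go-Back-N sliding window follows: up to window_size frames
may be outstanding at once, a cumulative ACK slides the window base forward, and a timeout would
roll the sender back to the base. The ACK source is simulated, so no loss actually occurs in this run.

# Go-Back-N sender sketch with a window of 4. ACKs are cumulative:
# ack = n means "every frame with sequence number below n has arrived".
frames = ["frame-%d" % i for i in range(8)]
window_size = 4
base = 0        # oldest unacknowledged frame
next_seq = 0    # next frame to send

def receive_ack():
    # Stand-in for the channel: here every outstanding frame gets acknowledged.
    return next_seq

while base < len(frames):
    while next_seq < len(frames) and next_seq < base + window_size:
        print("send", frames[next_seq])   # fill the window
        next_seq += 1
    ack = receive_ack()
    if ack > base:
        base = ack                        # cumulative ACK slides the window forward
    else:
        next_seq = base                   # (simulated) timeout: go back and resend from base
print("all frames acknowledged")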
Error Control (in Flow Control Context):
ACK/NAK: Acknowledgment (ACK) and Negative Acknowledgment (NAK) messages ensure that the
sender knows whether the data was received correctly.
Retransmission: If the sender does not receive an acknowledgment within a specified time
(timeout), it retransmits the data.
Key Takeaways:
Error Detection ensures that errors during transmission are identified (using parity bits, CRC, and
checksums).
Error Correction fixes these errors using ARQ protocols and error-correcting codes.
Flow Control ensures smooth data transfer between fast senders and slow receivers, using
techniques like Stop-and-Wait and Sliding Window.
Practice Questions:
1. Short Answer:
Explain the Sliding Window technique. How does it differ from Stop-and-Wait?
What is the role of CRC in error detection?
Describe Hamming Code and its use in error correction.
2. Multiple Choice:
Which of the following error detection methods is most effective for detecting burst errors?
a) Parity Bit
b) Checksum
c) CRC
d) Hamming Code
Which ARQ protocol retransmits only the lost frames?
a) Stop-and-Wait ARQ
b) Go-Back-N ARQ
c) Selective Repeat ARQ
d) All of the above
3. Long Answer:
Discuss the importance of flow control in data communication and the mechanisms used to
implement it.
3.5 Medium Access Control (MAC) Sublayer
The MAC Sublayer is a crucial part of the Data Link Layer that deals with how devices share access to a
communication medium in networks. The MAC layer ensures that when multiple devices are competing
for the same transmission medium, there is a system for managing this contention to prevent collisions
and ensure efficient communication.
3.5.1 Contention-Based Media Access Protocols
Contention-based protocols allow any device on the network to transmit data when it is ready, without
needing permission from other devices. However, since multiple devices may try to send data at the same
time, collisions can occur.
Basic Mechanism: In the simplest schemes (ALOHA), devices transmit as soon as data is ready; in
carrier-sense schemes (CSMA), they listen to the channel first and send only if it is free. If two
devices transmit simultaneously, a collision occurs, and both must retransmit after a random
backoff time.
Examples:
ALOHA: A simple contention-based protocol where devices send data as soon as they have it, but
must retransmit if a collision occurs.
Pure ALOHA: Devices send data at any time; if a collision occurs, they wait a random amount of
time and retransmit.
Slotted ALOHA: An improvement over pure ALOHA. The channel is divided into discrete time
slots, reducing the likelihood of collisions by ensuring that all devices transmit at the beginning of
a time slot.
Carrier Sense Multiple Access (CSMA): A protocol where devices listen to the channel before
sending data. If the channel is idle, they transmit; if it is busy, they wait.
CSMA/CD (Carrier Sense Multiple Access with Collision Detection): Used in Ethernet
networks, where devices listen for collisions during transmission and stop transmitting if a
collision is detected, then retransmit after a random backoff.
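A rough sketch of the CSMA/CD transmit logic with truncated binary exponential backoff follows;
channel_busy() and collision_detected() are simulated placeholders rather than a real network-card
interface, and the probabilities are arbitrary.

import random

# CSMA/CD sketch with truncated binary exponential backoff.
def channel_busy():
    return random.random() < 0.3          # pretend the medium is busy 30% of the time

def collision_detected():
    return random.random() < 0.2          # pretend 20% of transmissions collide

def csma_cd_send(frame, max_attempts=16):
    for attempt in range(1, max_attempts + 1):
        while channel_busy():             # 1. carrier sense: wait for the medium to go idle
            pass
        if not collision_detected():      # 2. transmit while listening for a collision
            return True                   # no collision: frame delivered
        # 3. collision: pick a random backoff from a range that doubles each attempt
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)
        # (a real adapter would now wait `slots` slot times before retrying)
    return False                          # give up after too many attempts

print("delivered" if csma_cd_send("frame") else "aborted")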
3.5.2 Random Access Protocols
Random Access Protocols are a subset of contention-based protocols where devices make independent
decisions about when to send data, with no centralized control.
Pure ALOHA and Slotted ALOHA are examples of random access protocols. In these, the device that
needs to send data does so without checking the channel's state. If the transmission results in a
collision, the device waits for a random time before attempting to send the data again.
Key Characteristics:
Uncoordinated Access: Devices transmit independently, leading to potential collisions.
Retransmission on Collision: Collisions result in data loss and require retransmissions.
Efficiency: As the number of devices increases, the efficiency of random access protocols
decreases due to an increase in collisions.
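The classical textbook throughput results quantify this: with an offered load of G frame attempts
per frame time, pure ALOHA achieves S = G * e^(-2G) (peaking at about 18.4%), while slotted
ALOHA achieves S = G * e^(-G) (peaking at about 36.8%). A quick check:

from math import exp

# Throughput S as a function of offered load G (frames per frame time).
def pure_aloha(G):
    return G * exp(-2 * G)       # vulnerable period = 2 frame times

def slotted_aloha(G):
    return G * exp(-G)           # vulnerable period = 1 slot

print(round(pure_aloha(0.5), 3))     # 0.184 -> peak throughput of pure ALOHA
print(round(slotted_aloha(1.0), 3))  # 0.368 -> peak throughput of slotted ALOHA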
3.5.3 Polling-Based MAC Protocols
Polling-based protocols use a central device or station, called a polling master, that controls the access
to the transmission medium. The polling master sends a poll to each station, one at a time, asking if it has
data to send.
How It Works:
The master station controls the communication by polling each station in sequence.
A station that has data to transmit responds to the poll and sends its data.
This method ensures no collisions since only one station is allowed to transmit at a time.
Advantages:
No collisions, leading to more efficient use of the medium.
Allows more predictable and orderly access.
Disadvantages:
Centralized control can lead to bottlenecks.
The polling master must manage all traffic, which can become inefficient with a large number of
stations.
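A minimal sketch of a polling master's main loop is shown below; the Station class and its method
names are illustrative, not part of any standard.

# Polling sketch: the master polls each station in turn, so only one station
# transmits at a time and collisions cannot occur.
class Station:
    def __init__(self, name, queued_frames):
        self.name = name
        self.queue = list(queued_frames)   # frames waiting to be sent

    def poll(self):
        """Respond to a poll with one frame, or None if nothing to send."""
        return self.queue.pop(0) if self.queue else None

stations = [Station("A", ["a1", "a2"]), Station("B", []), Station("C", ["c1"])]

for cycle in range(2):                     # two polling cycles
    for st in stations:                    # master polls each station in sequence
        data = st.poll()
        if data is not None:
            print(f"cycle {cycle}: station {st.name} sent {data}")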
3.5.4 IEEE Standard 802.3 and Ethernet
IEEE 802.3 is the standard for Ethernet, one of the most widely used wired networking
technologies. Classic shared-medium (half-duplex) Ethernet uses the CSMA/CD protocol for
Medium Access Control; modern switched, full-duplex Ethernet no longer needs it.
How It Works:
Devices on an Ethernet network use Carrier Sense Multiple Access with Collision Detection
(CSMA/CD).
Before transmitting, each device checks if the channel is idle. If it is, the device starts
transmitting; if not, it waits.
If two devices transmit at the same time, a collision occurs, and the devices stop transmitting.
They then wait for a random backoff time before trying again.
Key Features:
Ethernet is widely used in Local Area Networks (LANs).
It is designed to work in bus and star topologies.
It supports high-speed data transfer (ranging from 10 Mbps to 100 Gbps and beyond).
3.5.5 IEEE Standard 802.4 Token Bus
IEEE 802.4 is a standard for Token Bus networks, which combine a bus topology with a
token-passing method for managing access to the communication medium.
How It Works:
The network uses a bus topology, where all devices are connected to a single communication
line (bus).
A special token is passed around the network. Only the device holding the token is allowed to
transmit data.
After a device finishes transmitting, it passes the token to the next device.
Advantages:
No collisions occur because only one device can transmit at a time.
Provides deterministic, orderly access, so performance stays predictable even with many devices.
Disadvantages:
Token passing can introduce delays, especially if the token is lost or corrupted.
Relatively complex to implement compared to simpler contention-based protocols.
3.5.6 IEEE Standard 802.5 Token Ring
IEEE 802.5 defines the Token Ring network, which uses a ring topology and a token-passing method
for medium access control.
How It Works:
Devices are connected in a physical ring topology.
A token circulates around the network, and only the device holding the token can transmit data.
After transmitting, the device releases the token for the next device in the ring to use.
Key Features:
Collision-Free: No collisions since only one device can hold the token at a time.
Deterministic Access: Devices know when they will be able to transmit, ensuring predictable
behavior.
Redundancy: If one device fails, the token can still circulate, as the system can bypass the faulty
device.
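A minimal sketch of token passing around a ring (the same idea underlies both Token Bus and
Token Ring) is shown below; the data structures are illustrative and ignore details such as
token-holding timers and ring maintenance.

from collections import deque

# Token-passing sketch: the token circulates around the ring and only the
# station currently holding it may transmit one queued frame.
ring = ["A", "B", "C", "D"]                # stations in ring order
queues = {"A": deque(["A1"]), "B": deque(), "C": deque(["C1", "C2"]), "D": deque()}

holder = 0                                 # index of the station holding the token
for _ in range(2 * len(ring)):             # let the token travel around the ring twice
    station = ring[holder]
    if queues[station]:                    # only the token holder may transmit
        print(station, "transmits", queues[station].popleft())
    holder = (holder + 1) % len(ring)      # pass the token to the next station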
3.5.7 Address Resolution Protocol (ARP)
ARP is a protocol used to map an IP address to a MAC address. Devices in a network need each
other's physical (MAC) address so that frames can be delivered correctly within a local area
network (LAN).
How It Works:
When a device knows the IP address of the destination device but not its MAC address, it sends an
ARP request to the network.
The request is a broadcast asking, "Who has IP address X?" The device with that IP address sends
an ARP reply, containing its MAC address.
Use Case: ARP is essential for communication within a local network (e.g., between devices in a
LAN), where devices need to map IP addresses to physical hardware addresses.
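A minimal sketch of the ARP request/reply exchange and the sender's ARP cache is shown below;
the addresses and the dictionary standing in for the broadcast domain are purely illustrative.

# ARP sketch: resolve an IP address to a MAC address on the local network.
lan_hosts = {                                 # what each host would answer in an ARP reply
    "192.168.1.10": "aa:bb:cc:dd:ee:01",
    "192.168.1.20": "aa:bb:cc:dd:ee:02",
}
arp_cache = {}                                # the sender's IP -> MAC cache

def arp_resolve(ip):
    if ip in arp_cache:
        return arp_cache[ip]                  # cache hit: no request needed
    # Broadcast ARP request: "Who has <ip>? Tell me your MAC address."
    mac = lan_hosts.get(ip)                   # the owner of that IP replies with its MAC
    if mac is not None:
        arp_cache[ip] = mac                   # cache the reply for future frames
    return mac

print(arp_resolve("192.168.1.20"))  # aa:bb:cc:dd:ee:02, learned via an ARP exchange
print(arp_resolve("192.168.1.20"))  # answered from the ARP cache this time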
3.5.8 Reverse Address Resolution Protocol (RARP)
RARP performs the reverse of ARP: it maps a MAC address to an IP address. This is useful when a device
needs to discover its IP address, such as in the case of diskless workstations that don't have any storage
and thus can't store an IP address.
How It Works:
A device broadcasts a RARP request with its MAC address, asking "What is my IP address?"
A RARP server receives the request, looks up the MAC address in its database, and sends the IP
address back to the device.
Use Case: RARP is less commonly used today, as DHCP (Dynamic Host Configuration Protocol) has
replaced it for assigning IP addresses to devices on a network.
Key Takeaways:
MAC Sublayer: Manages how devices access the shared transmission medium.
Contention-Based: Allows devices to transmit independently, but collisions may occur (e.g., ALOHA,
CSMA).
Polling-Based: A central station polls devices for data, ensuring no collisions.
Token Passing: Only the device with the token can transmit, avoiding collisions (e.g., Token Bus,
Token Ring).
ARP and RARP: Protocols for mapping IP addresses to MAC addresses (ARP) and vice versa (RARP).
Practice Questions:
1. Short Answer:
What is the main difference between contention-based and polling-based MAC protocols?
Explain how CSMA/CD works and its role in Ethernet networks.
2. Multiple Choice:
Which of the following protocols is used for mapping an IP address to a MAC address?
a) ARP
b) RARP
c) CSMA/CD
d) Token Passing
3. Long Answer:
Discuss the working of the Token Ring network and compare it with Ethernet in terms of
collision handling and access control.