
Computer Networking: The Data Link Layer (Unit 3)
This report provides detailed answers to the assignment questions on the Data Link Layer in
computer networking. Drawing from fundamental networking principles and specifications, it
covers the various functions, protocols, and technologies that operate at this crucial layer of the
OSI model.

Functions of the Data Link Layer


The Data Link Layer is the second layer of the OSI (Open Systems Interconnection) model,
positioned above the Physical Layer and below the Network Layer. It provides node-to-node
communication over the physical medium and supports error detection and, where required,
reliable delivery between devices on the same network segment. The Data Link Layer performs
several critical functions:

Framing
Framing involves breaking the continuous data stream from the Network Layer into manageable
frames for transmission over the network. Each frame typically includes a header and trailer that
delineate its boundaries [1] . This process helps in synchronization, allowing the receiver to
identify the start and end of each frame, making it easier to handle and interpret the transmitted
data [1] .

Addressing
The Data Link Layer adds addressing information to the frame, including source and destination
addresses, which are typically MAC (Media Access Control) addresses [1] . These hardware
addresses are crucial for identifying devices on the same network segment. The source address
indicates the sender, while the destination address ensures that the frame reaches the intended
recipient [1] .

Error Detection and Correction


This layer includes mechanisms for error detection and, in some cases, error correction.
Techniques such as checksums, Cyclic Redundancy Check (CRC), and parity bits are used to
identify transmission errors [1] . When errors are detected, the receiver can request
retransmission of corrupted frames or take corrective measures, enhancing the reliability of
communication [1] .
Flow Control
Flow control mechanisms manage the rate of data transmission between connected devices to
prevent data loss or buffer overflow [1] . It essentially serves as a speed-matching mechanism
between sender and receiver. Techniques include buffering, acknowledgments, and windowing.
By regulating the data flow rate, flow control prevents data loss due to speed mismatches and
optimizes the use of network resources [1] .

Access Control
Access control regulates how multiple devices share access to the communication medium,
especially in shared network environments [1] . It prevents multiple devices from attempting to
transmit data simultaneously, which would cause collisions. Techniques like Carrier Sense
Multiple Access with Collision Detection (CSMA/CD) coordinate access to the communication
medium, particularly in Ethernet networks [1] .

Data Link Control, Including Framing and Flow Control


Data Link Control (DLC) provides three fundamental services within the Data Link Layer: framing,
flow control, and error control [1] . These functions collectively contribute to reliable and efficient
communication between network devices.

Framing in Data Link Control


Framing is the process of breaking a stream of bits into manageable frames for transmission
over the network [1] . It defines how the beginning and end of each frame are identified.
The sender adds special bit patterns or characters at the beginning and end of each frame to
mark its boundaries. The receiver uses these markers to identify and extract the frames from the
bitstream [1] . Without proper framing, the receiver would be unable to determine where one
frame ends and another begins, making communication impossible.
Different protocols use different framing methods:
Character-Oriented Protocols: Use special characters (like STX and ETX) to mark frame
boundaries
Bit-Oriented Protocols: Use flag sequences (like 01111110 in HDLC)
Length-Field Framing: Include a field that specifies the length of the frame
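As an illustration, character-oriented framing can be sketched in Python using byte stuffing. The FLAG/ESC values and the XOR-with-0x20 escape rule below follow the PPP convention (0x7E/0x7D), chosen here for concreteness; this is a minimal sketch, not a full protocol implementation:

```python
FLAG, ESC = 0x7E, 0x7D

def frame(payload: bytes) -> bytes:
    """Wrap the payload in FLAG delimiters, escaping any FLAG/ESC
    bytes that happen to occur inside the data itself."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])  # escape, then flip bit 5
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)

def deframe(data: bytes) -> bytes:
    """Strip the FLAG delimiters and undo the escaping."""
    body = data[1:-1]
    out, i = bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            out.append(body[i + 1] ^ 0x20)
            i += 2
        else:
            out.append(body[i])
            i += 1
    return bytes(out)
```

Because the FLAG byte can never appear inside an escaped payload, the receiver can always locate frame boundaries unambiguously.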

Flow Control in Data Link Control


Flow control manages the rate of data transmission between the sender and receiver to prevent
the receiver from being overwhelmed by a fast sender [1] . This is crucial for preventing data loss
and congestion in the network.
Flow control mechanisms include:
1. Stop-and-Wait Flow Control:
The sender transmits a frame and then waits for an acknowledgment before sending
the next one
Simple but inefficient, especially over high-latency connections
2. Sliding Window Flow Control:
Allows the sender to transmit multiple frames before receiving acknowledgments
The "window" represents the number of frames that can be sent without
acknowledgment
More efficient, especially for high-bandwidth or high-latency connections
When acknowledgments are received, the window "slides" forward, allowing more
frames to be sent
Flow control is essential for optimizing network performance, preventing buffer overflow at the
receiver, and ensuring that data is not lost due to timing mismatches between devices [1] .
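The sliding-window behaviour described above can be illustrated with a toy round-based simulation. It assumes an idealized channel where every cumulative acknowledgment arrives in order — a simplifying assumption for illustration, not a real protocol:

```python
def sliding_window(num_frames, window_size):
    """Round-based sketch: the sender may have up to `window_size`
    unacknowledged frames outstanding; each in-order cumulative ACK
    slides the window forward by one."""
    base, next_seq, events = 0, 0, []
    while base < num_frames:
        # send while the window [base, base + window_size) is open
        while next_seq < min(num_frames, base + window_size):
            events.append(("send", next_seq))
            next_seq += 1
        # an ACK for the oldest outstanding frame slides the window
        events.append(("ack", base))
        base += 1
    return events

print(sliding_window(4, window_size=2))
```

The trace shows at most two frames in flight at any time: frame 2 cannot be sent until frame 0 has been acknowledged.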

Error Detection and Correction Techniques


Error detection and correction techniques ensure the integrity of data transmitted over
potentially noisy or unreliable communication channels. These techniques are crucial for
maintaining data accuracy in the Data Link Layer.

Error Detection Techniques

Parity Check
The simplest error detection method involves adding a parity bit to ensure that the number of 1s
in the data is even (even parity) or odd (odd parity). If the received data doesn't match the
expected parity, an error is detected. However, parity checks can only detect odd numbers of
bit errors and cannot identify which specific bits are in error.
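A minimal sketch of even parity, illustrating why an even number of flipped bits goes undetected:

```python
def add_even_parity(bits: str) -> str:
    """Append a parity bit so the total number of 1s is even."""
    return bits + ("1" if bits.count("1") % 2 else "0")

def check_even_parity(word: str) -> bool:
    """True if the received word (data + parity bit) has even parity."""
    return word.count("1") % 2 == 0

word = add_even_parity("1011001")  # four 1s already, so parity bit is 0
```

Flipping one bit of `word` makes the check fail, but flipping any two bits restores even parity and the error slips through — exactly the limitation noted above.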

Checksum
Checksums involve summing up the bits of data in specific blocks and appending the sum as a
checksum value. The receiver recalculates the checksum based on the received data and
compares it with the received checksum. If they don't match, an error is detected. Checksums
are commonly used in Internet protocols like IP and TCP.
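The Internet-style 16-bit ones'-complement checksum can be sketched as follows; the end-around-carry fold is the step that distinguishes it from a plain sum:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum in the style of IP/TCP/UDP:
    sum the data as 16-bit words, fold carries back in, complement."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # end-around carry
    return ~total & 0xFFFF
```

The receiver sums the data words together with the received checksum; an error-free transmission yields 0xFFFF.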

Cyclic Redundancy Check (CRC)


CRC is a more sophisticated error detection technique based on binary division [1] . The sender
treats the data as a binary number and divides it by a predetermined divisor (known as the
generator polynomial). The remainder of this division becomes the CRC value that is appended
to the data.
The receiver performs the same division on the received data and checks if the remainder
matches the received CRC. CRC is more reliable than parity and checksum for detecting burst
errors and is widely used in data link layer protocols like Ethernet and HDLC.
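The modulo-2 division described above can be sketched bit by bit. The data/generator pair used here is a common textbook example, not drawn from any particular protocol:

```python
def crc_remainder(data_bits: str, generator: str) -> str:
    """Append len(generator)-1 zero bits to the data, then perform
    XOR (modulo-2) division by the generator; the remainder is the CRC."""
    k = len(generator) - 1
    dividend = list(data_bits + "0" * k)
    for i in range(len(data_bits)):
        if dividend[i] == "1":           # divide only where the leading bit is 1
            for j, g in enumerate(generator):
                dividend[i + j] = str(int(dividend[i + j]) ^ int(g))
    return "".join(dividend[-k:])

data, gen = "1101011011", "10011"        # classic textbook example
crc = crc_remainder(data, gen)           # "1110"
```

Appending the CRC to the data produces a bit string exactly divisible by the generator, so the receiver's division yields a zero remainder when no errors occurred.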
Error Correction Techniques

Automatic Repeat reQuest (ARQ)


When an error is detected, the receiver requests the sender to retransmit the data. There are
several types of ARQ:
1. Stop-and-Wait ARQ:
The sender waits for an acknowledgment after sending each frame
If an acknowledgment isn't received within a timeout period or a negative
acknowledgment is received, the frame is retransmitted
2. Go-Back-N ARQ:
The sender continues sending frames up to a window size
If an error is detected, all frames from the erroneous one onward are retransmitted
Simpler for the receiver but potentially wasteful of bandwidth
3. Selective Repeat ARQ:
Only the erroneous frames are retransmitted
More efficient than Go-Back-N but requires more complex buffering at the receiver

Forward Error Correction (FEC)


FEC adds redundant data (error-correcting codes) that allows the receiver to detect and correct
errors without requesting retransmission. Examples include:
Hamming Codes: Can correct single-bit errors and detect double-bit errors
Reed-Solomon Codes: Effective against burst errors, used in CDs, DVDs, and deep space
communications
Turbo Codes: Used in 3G/4G mobile communications for high performance in noisy channels
The choice between error detection/correction techniques depends on the specific requirements
of the communication system, including the expected error rate, the cost of retransmission, and
available processing power.
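The Hamming-code idea can be sketched with a Hamming(7,4) encoder and corrector; bit positions follow the classic layout where parity bits occupy positions 1, 2, and 4:

```python
def hamming74_encode(d):
    """Encode 4 data bits as the codeword [p1, p2, d1, p3, d2, d3, d4];
    each parity bit covers the positions whose 1-based index contains
    its power of two."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4      # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4      # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4      # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parity checks; their combined value (the syndrome)
    is the 1-based position of a single-bit error, or 0 if none."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1               # flip the erroneous bit
    return c
```

Flipping any single bit of a codeword produces a nonzero syndrome that points directly at the damaged position, which is why the receiver can correct it without asking for retransmission.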

High-Level Data Link Control (HDLC) and Point-to-Point Protocol (PPP)

High-Level Data Link Control (HDLC)


High-Level Data Link Control (HDLC) is a bit-oriented, synchronous data link layer protocol
defined by the International Organization for Standardization (ISO) in standards ISO/IEC
13239 [1] . It provides reliable and versatile communication over point-to-point and multipoint
links.
HDLC Frame Structure
HDLC frames consist of several fields [1] :
1. Flag Field: Marks the beginning and end of a frame with the bit pattern 01111110.
2. Address Field: Identifies the secondary station involved in the exchange. In point-to-point
configurations it carries little information, but in multipoint configurations it is essential for
directing each frame to the correct station [1] .
3. Control Field: Contains information about the type of frame (information, supervisory, or
unnumbered) and control functions [1] .
4. Information Field: Contains the actual data payload. The length can vary based on
implementation [1] .
5. Frame Check Sequence (FCS): Contains a cyclic redundancy check (CRC) for error
detection [1] .

Bit Stuffing in HDLC


HDLC uses a technique called bit stuffing to ensure frame structure integrity when there are long
sequences of consecutive bits with the same value [1] . If five consecutive 1s appear in the data,
a 0 is automatically inserted (stuffed) after them to prevent the pattern from being mistaken for
a flag. The receiver automatically removes these stuffed bits.
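The stuffing rule can be sketched directly:

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s so the data
    can never contain the flag pattern 01111110."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)
```

After stuffing, the payload can contain at most five consecutive 1s, so the six-1s flag sequence remains unique to frame boundaries.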

HDLC Modes of Operation


1. Normal Response Mode (NRM): Primary station initiates communication, secondary stations
respond
2. Asynchronous Response Mode (ARM): Secondary stations can initiate communication
3. Asynchronous Balanced Mode (ABM): Both stations can initiate communication

Point-to-Point Protocol (PPP)


Point-to-Point Protocol (PPP) is a data link layer protocol used to establish direct connections
between two network nodes [1] . It's commonly used in dial-up connections and dedicated point-
to-point links, providing a standard method for encapsulating and transmitting network-layer
protocols over various physical media.

PPP Architecture
PPP consists of three main components [1] :
1. A method for encapsulating datagrams over serial links
2. A Link Control Protocol (LCP) for establishing, configuring, and testing connections
3. Network Control Protocols (NCPs) for establishing and configuring different network-layer
protocols
PPP Operation
1. Establishment Phase: Using LCP to establish and configure the link
2. Authentication Phase (optional): Using protocols like PAP or CHAP
3. Network Layer Protocol Phase: Configuring network-layer protocols using NCPs
4. Termination Phase: Closing the link when communication completes

PPP Authentication
PAP (Password Authentication Protocol): A simple protocol where credentials are sent in
plain text
CHAP (Challenge-Handshake Authentication Protocol): A more secure protocol using a
challenge-response mechanism
Both HDLC and PPP are crucial for establishing reliable point-to-point connections, with PPP
being more widely used in modern networks due to its flexibility and additional features like
authentication.

The Channel Allocation Problem


The channel allocation problem in the data link layer refers to the challenge of efficiently and
fairly distributing a shared communication medium among multiple devices [2] . This problem is
particularly relevant in networks where multiple devices share the same physical communication
channel, such as wireless networks or traditional Ethernet networks.

Core Issues
When multiple devices transmit simultaneously on a shared medium, their signals can interfere
with each other, leading to collisions and data corruption [2] . The channel allocation problem
involves finding ways to allow multiple devices to share the medium while minimizing collisions
and ensuring fair access.

Approaches to Channel Allocation

Static Channel Allocation


In static allocation, the channel is divided into fixed parts, and each part is exclusively allocated
to a specific device [2] . While simple and collision-free, this approach is inefficient if some
devices are idle while others are busy. An example is Frequency Division Multiplexing (FDM),
where each device gets a fixed frequency band.

Dynamic Channel Allocation


In dynamic allocation, the channel is allocated based on demand [2] . Devices request access to
the channel when they need to transmit. Examples include random access protocols like CSMA.
This approach is more efficient but has potential for conflicts and overhead in managing
allocations.
Multiplexing Techniques
Multiplexing techniques are used to address the channel allocation problem:
1. Frequency Division Multiplexing (FDM):
Divides the frequency spectrum into non-overlapping bands
Each device is assigned a unique frequency band
Devices transmit simultaneously without interfering
Analogous to different radio stations broadcasting on different frequencies [2]
2. Time Division Multiplexing (TDM):
Divides time into fixed-size slots
Each device is allocated specific time slots
Devices take turns using the channel
Similar to taking turns speaking in a meeting [2]
3. Code Division Multiplexing (CDM):
Assigns each device a unique code
All devices transmit simultaneously using their codes
The receiver extracts the intended signal using the same code
Used in CDMA cellular systems
The channel allocation problem remains a fundamental challenge in network design, with
different solutions offering various trade-offs in terms of efficiency, fairness, complexity, and
overhead.

Comparison of Multiple Access Methods: ALOHA, CSMA, CSMA/CD, CSMA/CA


Multiple access methods allow several devices to share a common communication channel.
Here's a comparison of key random access protocols:

ALOHA

Pure ALOHA
Description: Devices transmit data whenever they have data to send, without checking if the
channel is busy [2] . If a collision occurs (detected by lack of acknowledgment), devices wait
for a random time and then retransmit.
Analogy: People talking in a group without any order; anyone can speak whenever they
want [2] .
Advantages: Simple and easy to implement.
Disadvantages: High collision probability, especially under heavy load. Maximum theoretical
efficiency is only about 18%.
Applications: Early wireless networks and satellite communications.
Slotted ALOHA
Description: Time is divided into discrete slots, and devices can only transmit at the
beginning of a slot [2] . This reduces the chance of partial collisions.
Analogy: Group discussion organized into time slots; people can only start speaking at the
beginning of a slot [2] .
Advantages: Better efficiency than Pure ALOHA, with a theoretical maximum of about 37%.
Disadvantages: Still susceptible to collisions and requires slot synchronization.
Applications: Enhanced versions are used in cellular networks for control channel access.
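The 18% and 37% figures come from the classic throughput formulas S = G·e^(-2G) for Pure ALOHA and S = G·e^(-G) for Slotted ALOHA, where G is the offered load; a short script confirms the maxima:

```python
import math

def pure_aloha_throughput(G):
    """A frame survives only if no other frame starts within its
    two-frame-time vulnerable period: S = G * e^(-2G)."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """Slots halve the vulnerable period to one frame time: S = G * e^(-G)."""
    return G * math.exp(-G)

# Maxima occur at G = 0.5 and G = 1 respectively
print(round(pure_aloha_throughput(0.5), 3))     # ≈ 0.184 (about 18%)
print(round(slotted_aloha_throughput(1.0), 3))  # ≈ 0.368 (about 37%)
```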

Carrier Sense Multiple Access (CSMA)


Description: Devices listen to the channel before transmitting to check if it's busy [2] . If the
channel is clear, they transmit; if busy, they wait for a random time before attempting again.
Analogy: Checking if others are talking before speaking in a group [2] .
Advantages: Better performance than ALOHA due to the carrier sensing mechanism.
Disadvantages: Collisions can still occur because of propagation delay (a station may begin
transmitting before another station's signal has reached it).
Applications: Early versions of wireless LANs.

CSMA/CD (Carrier Sense Multiple Access with Collision Detection)


Description: Extends CSMA by allowing devices to detect collisions while transmitting [2] . If a
collision is detected, the device stops transmitting, sends a jam signal, and then waits for a
random time before retrying.
Analogy: People can hear if someone else is talking simultaneously and stop if they detect a
collision [2] .
Advantages: Faster collision resolution compared to CSMA, improving efficiency.
Disadvantages: Not suitable for wireless networks where a device cannot listen while
transmitting.
Applications: Traditional Ethernet (IEEE 802.3) networks.
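The "random time before retrying" is classically implemented as truncated binary exponential backoff; the sketch below uses the doubling cap and attempt limit of classic Ethernet:

```python
import random

def backoff_slots(collisions: int) -> int:
    """Truncated binary exponential backoff: after the n-th collision,
    wait a random number of slot times in [0, 2^min(n,10) - 1];
    give up after 16 attempts, as classic Ethernet does."""
    if collisions > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(collisions, 10)
    return random.randrange(2 ** k)
```

Doubling the range on each collision spreads contending stations apart quickly, so repeated collisions between the same pair become exponentially unlikely.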

CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance)


Description: Designed for wireless networks, it aims to avoid collisions rather than detect
them [2] . Devices listen before transmitting, wait for a random period (backoff time), and
may use mechanisms like Request to Send (RTS) and Clear to Send (CTS) to reserve the
channel.
Analogy: People announce their intention to speak and wait for acknowledgment before
starting.
Advantages: Better suited for wireless networks where collision detection is challenging.
Disadvantages: Introduces overhead due to the RTS/CTS mechanism.
Applications: Wi-Fi (IEEE 802.11) networks.
The evolution of these multiple access methods reflects the increasing sophistication of network
technologies. CSMA/CD dominates in wired Ethernet networks, while CSMA/CA is standard in
wireless Wi-Fi networks, each addressing specific challenges in their respective domains.

Controlled Access Methods: Reservation, Polling, Token Passing


Controlled access protocols provide a more organized approach to channel access compared to
random access protocols. They use mechanisms to regulate and coordinate which device can
transmit data at a given time [3] .

Reservation
Description: In reservation-based protocols, devices explicitly request permission to transmit
before actually sending data [3] . The timeline is divided into intervals: a reservation interval
(fixed length) and a data transmission interval (variable length) [3] . During the reservation
interval, each device can place a reservation in its slot if it has data to send. After the
reservation phase, devices transmit their data in the order of their reservations.
Analogy: Similar to students raising their hands to ask the teacher for permission before
speaking [3] .
Advantages:
Eliminates collisions during data transmission
Efficient under high load conditions
Disadvantages:
Introduces overhead due to the reservation phase
Not efficient for bursty traffic patterns
Applications: Satellite networks with significant propagation delay and wireless networks
with quality of service requirements [3] .

Polling
Description: A central controller (master or polling station) queries devices one by one to
check if they have data to transmit [3] . Only the device being polled is allowed to transmit
during its turn. The controller performs two essential functions: selection (telling a device it
has been selected) and poll (asking if it has data to send) [3] .
Analogy: The teacher goes around the room, asking each student if they have something to
say [3] .
Advantages:
Simple and orderly process
No collisions during data transmission
The central controller can enforce priorities
Disadvantages:
Overhead due to polling messages
Potential single point of failure (the central controller)
High latency with many devices to poll
Applications: Bluetooth communication, industrial networks requiring predictability, and
legacy terminal-to-mainframe communications [3] .

Token Passing
Description: Devices pass a special token or permission to transmit in a predefined order,
typically in a logical ring [3] . Only the device holding the token is allowed to transmit. After
finishing transmission or if it has no data to send, the device passes the token to the next
device in the sequence [3] .
Analogy: Like a special object (talking stick) that students must hold to speak [3] .
Advantages:
No collisions during data transmission
Distributed approach without a central controller
Fair access for all devices
Disadvantages:
Token loss or duplication can disrupt the network
Overhead in token passing
Potential high latency with many devices
Applications: Token Ring networks, FDDI networks, and industrial systems requiring
deterministic performance [3] .
Controlled access methods provide more structured access to the communication channel
compared to random access methods. While they introduce some overhead, they can be more
efficient under high load conditions and offer more predictable performance. The choice among
these methods depends on specific network requirements.

Channelization Techniques: FDMA, TDMA, CDMA


Channelization techniques enable multiple users to share a common communication channel
efficiently by dividing the available bandwidth (frequency, time, or code) to avoid interference.

Frequency Division Multiple Access (FDMA)


Description: FDMA divides the available frequency spectrum into separate non-overlapping
frequency bands [2] . Each user is allocated a unique frequency band for the entire duration
of communication. Users transmit simultaneously without interfering because they operate
on different frequencies.
Analogy: Like a radio tuner where different stations operate on different frequencies [2] .
Advantages:
Simple implementation
No precise timing or synchronization required
No guard time needed between transmissions
Effective for continuous transmission
Disadvantages:
Inefficient if users aren't actively transmitting
Requires guard bands between frequencies
Limited by available spectrum
Applications: First-generation (1G) cellular systems like AMPS, AM/FM radio broadcasting,
cable TV systems.

Time Division Multiple Access (TDMA)


Description: TDMA divides available time into multiple time slots [2] . Each user is allocated
specific time slots for transmission. Users take turns using the same frequency channel,
transmitting their data in their assigned time slots.
Analogy: Like a round-table discussion where participants take turns speaking during
designated times.
Advantages:
More efficient than FDMA for bursty traffic
No guard bands required between users
Flexible bandwidth allocation
Works well with digital technology
Disadvantages:
Requires precise time synchronization
Guard times reduce efficiency
Increases device complexity due to burst mode operation
Applications: Second-generation (2G) cellular systems like GSM, Digital Enhanced Cordless
Telecommunications (DECT), satellite communication systems.

Code Division Multiple Access (CDMA)


Description: CDMA assigns a unique code to each user. All users transmit simultaneously
over the same frequency band, but each uses a different code. The receiver extracts the
intended signal using the same code used by the sender, treating other signals as noise. It's
based on spread spectrum technology.
Analogy: Like a room with multiple conversations in different languages—if you know a
specific language, you can focus on that conversation while hearing others as background
noise.
Advantages:
Efficient spectrum usage
Increased capacity compared to FDMA/TDMA
Inherent resistance to interference and jamming
Soft handoff capability in cellular systems
No need for precise time/frequency coordination
Disadvantages:
Complex signal processing required
Power control is crucial to prevent "near-far" problem
Background noise from other users
Applications: Third-generation (3G) cellular systems like UMTS, GPS (Global Positioning
System), certain wireless LANs.
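The code-based separation can be sketched with 4-chip orthogonal (Walsh) sequences; the two-station setup below is a toy illustration of the principle, not a real CDMA air interface:

```python
# Orthogonal (Walsh) chip sequences — each station is assigned one
codes = {
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
}

def transmit(bits_by_station):
    """Each station multiplies its bit (+1 or -1) by its chip sequence;
    the shared channel simply adds the chip streams together."""
    channel = [0] * 4
    for station, bit in bits_by_station.items():
        for i, chip in enumerate(codes[station]):
            channel[i] += bit * chip
    return channel

def despread(channel, station):
    """Correlate the received signal with the station's own code: the
    inner product divided by the code length recovers that station's
    bit, while orthogonal codes cancel to zero."""
    code = codes[station]
    return sum(c, x) if False else sum(c * x for c, x in zip(code, channel)) // len(code)

signal = transmit({"A": +1, "B": -1})   # both stations transmit at once
```

Despreading `signal` with A's code yields +1 and with B's code yields -1, even though both bits occupied the same band at the same time — the essence of code-division access.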
Each channelization technique offers distinct advantages and trade-offs. The choice depends on
factors such as traffic patterns, number of users, available bandwidth, and quality of service
requirements. Modern communication systems often combine different techniques to optimize
performance.

Ethernet Standards in Wired LAN


Ethernet is the most widely used standard for wired Local Area Networks (LANs). Developed
initially by Xerox in the 1970s and later standardized by IEEE as 802.3, Ethernet has evolved
significantly over the years.

Basic Ethernet Concepts


Ethernet operates at the Physical and Data Link layers of the OSI model. It traditionally used
CSMA/CD for channel access in half-duplex mode, though modern networks typically operate in
full-duplex mode, eliminating collisions. Ethernet frames have a standard format including
destination and source MAC addresses, type field, data payload, and Frame Check Sequence
(FCS) for error detection.

Traditional Ethernet Standards

10BASE5 (Thick Ethernet)


The original Ethernet standard
Speed: 10 Mbps
Media: Thick coaxial cable (0.4 inches diameter)
Maximum segment length: 500 meters
Also known as "Thicknet"
10BASE2 (Thin Ethernet)
Speed: 10 Mbps
Media: Thin coaxial cable (0.2 inches diameter)
Maximum segment length: 185 meters
Also known as "Thinnet" or "Cheapernet"

10BASE-T
Speed: 10 Mbps
Media: Twisted-pair copper cable (Category 3 or higher)
Maximum segment length: 100 meters
Uses RJ-45 connectors and a star topology

Fast Ethernet Standards

100BASE-TX
Speed: 100 Mbps
Media: Twisted-pair copper cable (Category 5 or higher)
Maximum segment length: 100 meters
Uses two pairs of wires
The most common Fast Ethernet standard

100BASE-FX
Speed: 100 Mbps
Media: Multi-mode fiber optic cable
Maximum segment length: 412 meters (half-duplex), 2 kilometers (full-duplex)
Uses two fiber strands

Gigabit Ethernet Standards

1000BASE-T
Speed: 1 Gbps
Media: Twisted-pair copper cable (Category 5e or higher)
Maximum segment length: 100 meters
Uses all four pairs of wires
Most common gigabit standard for end-user connections
1000BASE-SX
Speed: 1 Gbps
Media: Multi-mode fiber optic cable
Maximum segment length: 220-550 meters depending on fiber type
Optimized for short distances

1000BASE-LX
Speed: 1 Gbps
Media: Single-mode or multi-mode fiber
Maximum segment length: Up to 5 kilometers for single-mode
Optimized for longer distances

10 Gigabit Ethernet Standards

10GBASE-T
Speed: 10 Gbps
Media: Twisted-pair copper cable (Category 6A or higher)
Maximum segment length: 100 meters

10GBASE-SR
Speed: 10 Gbps
Media: Multi-mode fiber
Maximum segment length: 26-400 meters depending on fiber type
For short-range data center connections

10GBASE-LR
Speed: 10 Gbps
Media: Single-mode fiber
Maximum segment length: Up to 10 kilometers

Higher-Speed Ethernet Standards


More recent standards include 25, 40, 100, 200, and 400 Gigabit Ethernet, primarily used in
data centers and high-performance computing environments. These standards use various
media options including multi-mode and single-mode fiber.
Ethernet's success and longevity can be attributed to its scalability, backward compatibility, and
adaptability as technology has advanced from 10 Mbps to 400 Gbps while maintaining the same
fundamental principles.
FDDI in Wired LAN
Fiber Distributed Data Interface (FDDI) is a standard for data transmission in local area networks
that uses fiber optic cable as the physical medium. Developed in the 1980s by ANSI, it became
popular in the 1990s as a high-speed backbone for enterprise networks.

FDDI Architecture

Dual Ring Topology


FDDI uses a dual ring architecture with a primary ring and a secondary ring. Under normal
operation, data travels on the primary ring in one direction (typically clockwise). The secondary
ring operates in the opposite direction and provides fault tolerance. If a break occurs in the
primary ring, the secondary ring is used to create a new complete path, forming a "wrapped"
configuration.

Token Passing Access Method


FDDI uses a token-passing protocol similar to Token Ring but with enhancements. A token
circulates around the ring, and a station that wishes to transmit must capture it. After capturing
the token,

References
1. Unit-3.pdf
2. https://www.sanfoundry.com/controlled-access-protocols-in-computer-network/
3. https://www.scaler.in/controlled-access-protocols/
