Notes UNIT-2
The Data Link Control (DLC) deals with procedures for communication between two
adjacent nodes no matter whether the link is dedicated or broadcast. Data link control
functions include framing, flow control, and error control.
Framing
The data link layer needs to pack bits into frames, so that each frame is distinguishable from
another.
Framing in the data link layer divides a message from one source to a destination by adding
a sender address and a destination address. The destination address defines where the packet
is to go; the sender address helps the recipient acknowledge the receipt.
Fixed-Size Framing
In fixed-size framing, there is no need for defining the boundaries of the frames; the size
itself can be used as a delimiter.
An example of this type of framing is the ATM wide-area network, which uses frames of
fixed size called cells.
Variable-Size Framing
In variable-size framing, we need a way to define the end of one frame and the beginning of
the next. Historically, two approaches were used for this purpose: a character-oriented
approach and a bit-oriented approach.
Character Oriented
In a character-oriented protocol, data to be carried are 8-bit characters from a coding system
such as ASCII. The header, which normally carries the source and
destination addresses and other control information, and the trailer, which carries error
detection or error correction redundant bits, are also multiples of 8 bits. To separate one
frame from the next, an 8-bit (1-byte) flag is added at the beginning and the end of a frame.
The flag, composed of protocol-dependent special characters, signals the start or end of a
frame.
Any pattern used for the flag could also be part of the information. If this happens, the
receiver, when it encounters this pattern in the middle of the data, thinks it has reached the
end of the frame. To fix this problem, a byte-stuffing strategy was added to character-
oriented framing.
In byte stuffing (or character stuffing), a special byte is added to the data section of the
frame when there is a character with the same pattern as the flag. The data section is stuffed
with an extra byte. This byte is usually called the escape character (ESC), which has a
predefined bit pattern. Whenever the receiver encounters the ESC character, it removes it
from the data section and treats the next character as data, not a delimiting flag.
Byte stuffing is the process of adding 1 extra byte whenever there is a flag or escape
character in the text.
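The escape mechanism described above can be sketched in Python. The flag and escape byte values (0x7E and 0x7D, as used in HDLC-like protocols) are illustrative assumptions, not values given in these notes:

```python
FLAG = 0x7E   # assumed flag byte, for illustration
ESC = 0x7D    # assumed escape byte, for illustration

def byte_stuff(data: bytes) -> bytes:
    """Add an extra ESC byte before every flag or escape byte in the data."""
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)      # stuffed escape byte
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Remove each ESC and treat the byte after it as plain data."""
    out = bytearray()
    i = 0
    while i < len(stuffed):
        if stuffed[i] == ESC:
            i += 1               # skip the escape; the next byte is data
        out.append(stuffed[i])
        i += 1
    return bytes(out)
```

The receiver's rule mirrors the sender's exactly, which is why the round trip restores the original data section.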
Bit-Oriented Protocols
Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow a 0
in the data, so that the receiver does not mistake the data for the flag 01111110.
This means that if the flag-like pattern 01111110 appears in the data, it is changed to
011111010 (stuffed) and is not mistaken for a flag by the receiver. The real flag 01111110 is
not stuffed by the sender and is recognized by the receiver.
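A minimal sketch of the stuffing and unstuffing rules, working on strings of "0"/"1" characters for readability:

```python
def bit_stuff(bits: str) -> str:
    """After every run of five consecutive 1s, insert an extra 0."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Drop the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this bit is a stuffed 0; discard it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)
```

Note that the flag-like pattern 01111110 indeed becomes 011111010, matching the example in the text.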
Flow Control
Flow control coordinates the amount of data that can be sent before receiving an
acknowledgment and is one of the most important duties of the data link layer. In most
protocols, flow control is a set of procedures that tells the sender how much data it can
transmit before it must wait for an acknowledgment from the receiver. The flow of data
must not be allowed to overwhelm the receiver. Any receiving device has a limited speed at
which it can process incoming data and a limited amount of memory in which to store
incoming data. The receiving device must be able to inform the sending device before those
limits are reached and to request that the transmitting device send fewer frames or stop
temporarily.
Flow control refers to a set of procedures used to restrict the amount of data that the
sender can send before waiting for acknowledgment.
Simplest Protocol
It is a unidirectional protocol in which data frames are traveling in only one direction from the
sender to receiver. We assume that the receiver can immediately handle any frame it
receives with a processing time that is small enough to be negligible. The data link layer of
the receiver immediately removes the header from the frame and hands the data packet to its
network layer, which can also accept the packet immediately. In other words, the receiver
can never be overwhelmed with incoming frames.
Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent frame and
retransmitting the frame when the timer expires.
The sequence number is based on modulo-2 arithmetic, so frames are numbered alternately
0, 1, 0, 1, .... In Stop-and-Wait ARQ, the sequence number defines the frame to be sent, and
the acknowledgment number of the ACK frame defines the next frame expected.
Sender control variable (Sn): stores the sequence number of the next frame to be sent
(e.g., if Sn holds 0, we send frame 0).
Receiver control variable (Rn): stores the sequence number of the next frame expected.
Stop-and-Wait ARQ. Frame 0 is sent and acknowledged. Frame 1 is lost and resent after
the time-out. The resent frame 1 is acknowledged and the timer stops. Frame 0 is sent and
acknowledged, but the acknowledgment is lost. The sender has no idea if the frame or the
acknowledgment is lost, so after the time-out, it resends frame 0, which is acknowledged.
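The alternating sequence numbers and resend-on-timeout behavior can be sketched as follows. The `transmit` channel function is a hypothetical stand-in for sending a frame and waiting for an ACK; returning None models a lost frame or lost acknowledgment (which a real sender detects when its timer expires):

```python
def stop_and_wait_send(frames, transmit):
    """Send each frame and resend it until the expected ACK arrives.
    transmit takes (seq, data) and returns the ACK number, or None
    when the frame or its acknowledgment is lost."""
    sn = 0                                   # sequence number of next frame
    for data in frames:
        # ACK carries the number of the NEXT frame expected: (sn + 1) mod 2
        while transmit((sn, data)) != (sn + 1) % 2:
            pass                             # timeout: resend the kept copy
        sn = (sn + 1) % 2                    # slide to the next frame

# Illustrative lossy channel: the second transmission is lost.
delivered, tries = [], [0]

def lossy_channel(frame):
    tries[0] += 1
    if tries[0] == 2:        # simulate loss of one transmission
        return None
    seq, data = frame
    delivered.append(data)
    return (seq + 1) % 2

stop_and_wait_send(["A", "B"], lossy_channel)
```

After the run, both frames arrive despite the loss; the sender needed three transmissions for two frames.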
Go-Back-N ARQ (Sliding Window Protocol)
The first is called Go-Back-N Automatic Repeat Request (the rationale for the
name will become clear later). In this protocol we can send several frames before
receiving acknowledgments; we keep a copy of these frames until the acknowledgments
arrive.
Sequence Numbers
If the header of the frame allows m bits for the sequence number, the sequence numbers
range from 0 to 2^m - 1. For example, if m is 4, the only sequence numbers are 0
through 15 inclusive. However, we can repeat the sequence, so the sequence numbers are
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, 1, 2, 3, ...
Sliding Window
The sliding window is an abstract concept that defines the range of sequence numbers that is
the concern of the sender and receiver.
The maximum size of the send window is 2^m - 1.
The send window can slide one or more slots when a valid acknowledgment arrives.
In Go-Back-N ARQ, the size of the send window must be less than 2^m; the size of the
receiver window is always 1.
Stop-and-Wait ARQ is a special case of Go-Back-N ARQ in which the size of the send
window is 1.
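A small bookkeeping sketch of the Go-Back-N send window, assuming m = 3 for illustration. Sf is the sequence number of the first outstanding frame and Sn the next frame to send; on a timeout, a Go-Back-N sender retransmits every outstanding frame:

```python
M = 2 ** 3                  # m = 3 bits -> sequence numbers 0..7
MAX_WINDOW = M - 1          # send window must be less than 2^m

def outstanding(sf, sn):
    """Sequence numbers sent but not yet acknowledged: from Sf up to
    (but not including) Sn, wrapping around modulo 2^m.
    On a timeout the sender resends all of these frames."""
    seqs = []
    i = sf
    while i != sn:
        seqs.append(i)
        i = (i + 1) % M
    return seqs
```

The wraparound case (e.g. Sf = 6, Sn = 2) shows why the window slides over the repeated sequence 0, 1, ..., 7, 0, 1, ...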
Selective Repeat ARQ
Go-Back-N ARQ simplifies the process at the receiver site. The receiver keeps track of only one
variable, and there is no need to buffer out-of-order frames; they are simply discarded. However,
this protocol is very inefficient for a noisy link. There is another mechanism that does not resend
N frames when just one frame is damaged. Only the damaged frame is resent. This mechanism is
called Selective Repeat ARQ. It is more efficient for noisy links.
Window Size
In Selective Repeat ARQ, the size of the sender and receiver windows must be at most
one-half of 2^m.
Error Control
Error control is both error detection and error correction. It allows the receiver to inform the
sender of any frames lost or damaged in transmission and coordinates the retransmission of
those frames by the sender.
In the data link layer, the term error control means error detection and retransmission.
Single-bit Error
The term single-bit error means that only one bit of a given data unit (such as a
byte, character, or packet) is changed from 1 to 0 or from 0 to 1, as shown in Fig. 3.2.1.
Burst Error
The term burst error means that two or more bits in the data unit have changed from 0 to
1 or vice versa. Note that a burst error does not necessarily mean that the errors occur in
consecutive bits. The length of the burst error is measured from the first corrupted bit to
the last corrupted bit; some bits in between may not be corrupted.
Sent:     0101110010101110
Received: 0101000001101110
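The definition of burst length (first corrupted bit to last corrupted bit, inclusive) can be checked directly on the sent/received example above:

```python
def burst_length(sent: str, received: str) -> int:
    """Distance from the first corrupted bit to the last, inclusive.
    Bits in between need not all be corrupted."""
    errors = [i for i, (s, r) in enumerate(zip(sent, received)) if s != r]
    return errors[-1] - errors[0] + 1 if errors else 0
```

For the pattern above the corrupted bits are at positions 4, 5, 8, and 9 (counting from 0), so the burst length is 6 even though two bits inside the burst are unchanged.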
Detection Versus Correction
The correction of errors is more difficult than the detection. In error detection, we are
looking only to see if any error has occurred. The answer is a simple yes or no. We are
not even interested in the number of errors. A single-bit error is the same for us as a
burst error.
In error correction, we need to know the exact number of bits that are corrupted and,
more importantly, their location in the message.
Hamming Distance
The Hamming distance between two words (of the same size) is the number of differences
between the corresponding bits. We show the Hamming distance between two words x and
y as d(x, y). The Hamming distance can easily be found if we apply the XOR operation
on the two words and count the number of 1s in the result.
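The XOR-and-count rule is one line of Python; comparing corresponding characters of two equal-length bit strings is the same as counting the 1s in their XOR:

```python
def hamming_distance(x: str, y: str) -> int:
    """Number of positions at which corresponding bits differ
    (the number of 1s in the XOR of the two words)."""
    assert len(x) == len(y), "words must be the same size"
    return sum(a != b for a, b in zip(x, y))
```

For example, d(10101, 11110) = 3 because the words differ in three bit positions.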
Redundancy
To detect or correct errors, we need to send extra (redundant) bits with the data.
Redundant bits are extra binary bits that are generated and added to the information-
carrying bits of a data transfer to ensure that no bits were lost during the transfer. A
parity bit is a bit appended to a block of binary bits to ensure that the total number of 1s
in the block is even or odd.
Blocks of data from the source pass through a check-bit or parity-bit generator, where a
parity bit of 1 is added to the block if it contains an odd number of 1s (ON bits) and
0 is added if it contains an even number of 1s. At the receiving end the parity bit is
computed from the received data bits and compared with the received parity bit, as shown
in Fig. 3.2.3. This scheme makes the total number of 1s even, which is why it is called
even-parity checking.
Figure 3.2.3 Even-parity checking scheme
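The even-parity scheme above amounts to two small functions, one for the sender and one for the receiver:

```python
def add_even_parity(block):
    """Append a parity bit: 1 if the block has an odd number of 1s,
    0 if even, so the total number of 1s becomes even."""
    return block + [sum(block) % 2]

def parity_ok(received):
    """Receiver check: the total number of 1s must still be even."""
    return sum(received) % 2 == 0
```

Flipping any single bit of a transmitted block makes the 1s count odd, so the check fails; flipping two bits keeps it even, which is why parity misses even-numbered bit errors.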
Performance
The checksum detects all errors involving an odd number of bits. It also detects most
errors involving even number of bits.
Figure 3.2.5 (a) Sender's end for the calculation of the checksum; (b) receiving end for
checking the checksum
The number of redundancy bits r needed for d data bits must satisfy 2^r >= d + r + 1.
The value of r must be determined by putting in the value of d in the relation. For
example, if d is 7, then the smallest value of r that satisfies the above relation is 4. So the
total bits, which are to be transmitted is 11 bits (d + r = 7 + 4 =11).
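Finding the smallest r that satisfies the relation is a short loop:

```python
def redundancy_bits(d: int) -> int:
    """Smallest r satisfying 2^r >= d + r + 1."""
    r = 1
    while 2 ** r < d + r + 1:
        r += 1
    return r
```

For d = 7 this gives r = 4 (total 11 bits, as above); for the 4-bit data used later it gives r = 3.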
Position: 11 10  9  8  7  6  5  4  3  2  1
Bit:       d  d  d  r  d  d  d  r  d  r  r
(r = redundant bit, d = data bit; redundant bits occupy positions 1, 2, 4, and 8)
Figure 3.2.8 Positions of redundancy bits in Hamming code
Figure 3.2.9 Use of Hamming code for error correction for a 4-bit data
Figure 3.2.9 shows how the Hamming code is used for error correction for 4-bit data
(d4d3d2d1) with the help of three redundant bits (r3r2r1). For the example data 1010, first
r1 (= 0) is calculated from the parity of bit positions 1, 3, 5, and 7. Then the parity
bit r2 is calculated from bit positions 2, 3, 6, and 7. Finally, the parity bit r3 is
calculated from bit positions 4, 5, 6, and 7, as shown. If any corruption occurs in the
transmitted codeword 1010010, the bit position in error can be found by recalculating
r3r2r1 at the receiving end. For example, if the received codeword is 1110010, the
recalculated value of r3r2r1 is 110, which indicates that the bit position in error is 6, the
decimal value of 110.
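The worked example above can be reproduced in code. The codeword is held as a list whose entries run from position 7 down to position 1 (d4 d3 d2 r3 d1 r2 r1):

```python
def hamming_encode(d4, d3, d2, d1):
    """Encode 4 data bits into a 7-bit codeword, positions 7..1."""
    r1 = d1 ^ d2 ^ d4    # covers positions 1, 3, 5, 7
    r2 = d1 ^ d3 ^ d4    # covers positions 2, 3, 6, 7
    r3 = d2 ^ d3 ^ d4    # covers positions 4, 5, 6, 7
    return [d4, d3, d2, r3, d1, r2, r1]

def error_position(cw):
    """Recompute the checks at the receiver; the syndrome value is the
    position of the corrupted bit (0 means no error)."""
    bit = {pos: cw[7 - pos] for pos in range(1, 8)}
    c1 = bit[1] ^ bit[3] ^ bit[5] ^ bit[7]
    c2 = bit[2] ^ bit[3] ^ bit[6] ^ bit[7]
    c3 = bit[4] ^ bit[5] ^ bit[6] ^ bit[7]
    return 4 * c3 + 2 * c2 + c1
```

Encoding 1010 yields the codeword 1010010 from the text, and feeding in the corrupted word 1110010 returns position 6, exactly as in the example.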
Data Link Control
The two main functions of the data link layer are data link control and media access
control. The first, data link control, deals with the design and procedures for communication
between two adjacent nodes: node-to-node communication.
Data link control functions include framing, flow and error control, and software
implemented protocols that provide smooth and reliable transmission of frames between
nodes.
HDLC
High-level Data Link Control (HDLC) is a bit-oriented protocol for communication
over point-to-point and multipoint links. It implements the ARQ mechanisms.
HDLC provides two common transfer modes that can be used in different configurations:
normal response mode (NRM) and asynchronous balanced mode (ABM).
Normal Response Mode
In normal response mode (NRM), the station configuration is unbalanced. We have one
primary station and multiple secondary stations. A primary station can send commands;
a secondary station can only respond. NRM is used for both point-to-point and
multipoint links.
Flag field. The flag field of an HDLC frame is an 8-bit sequence with the bit pattern
01111110 that identifies both the beginning and the end of a frame.
Address field. The second field of an HDLC frame contains the address of the secondary
station. If a primary station created the frame, it contains a to address. If a secondary creates
the frame, it contains a from address.
Control field. The control field is a 1- or 2-byte segment of the frame used for flow and
error control.
Information field. The information field contains the user's data from the network
layer or management information. Its length can vary from one network to another.
FCS field. The frame check sequence (FCS) is the HDLC error detection field. It can
contain either a 2- or 4-byte ITU-T CRC.
MAC (Media Access Control) Protocols
Random Access
In random access or contention methods, no station is superior to another station and
none is assigned the control over another. No station permits, or does not permit,
another station to send. At each instance, a station that has data to send uses a procedure
defined by the protocol to make a decision on whether or not to send. This decision
depends on the state of the medium (idle or busy).
ALOHA /Pure ALOHA
The original ALOHA protocol is called pure ALOHA. This is a simple, but elegant
protocol. The idea is that each station sends a frame whenever it has a frame to send.
However, since there is only one channel to share, there is the possibility of collision
between frames from different stations.
The medium is shared between the stations. When a station sends data, another station may
attempt to do so at the same time. The data from the two stations collide and become
garbled.
In ALOHA, two types of collision can occur: partial collision and complete collision.
The vulnerable time is the period in which there is a possibility of collision, i.e., the time
during which another frame can collide with the frame being sent. In pure ALOHA the
vulnerable period has a length of two frame (transmission) times.
Performance of ALOHA
Assumptions:
1. All frames have a fixed length of one time unit.
2. Infinite user population.
3. The offered load is modeled as a Poisson process with rate G.
The throughput for pure ALOHA is S = G × e^(-2G).
The maximum throughput Smax = 0.184 when G = 1/2.
Slotted ALOHA
Because a station is allowed to send only at the beginning of the synchronized time
slot, if a station misses this moment, it must wait until the beginning of the next time
slot. This means that the station which started at the beginning of this slot has already
finished sending its frame. Of course, there is still the possibility of collision if two stations
try to send at the beginning of the same time slot. However, the vulnerable time is
now reduced to one-half, equal to Tfr .
The vulnerable time for slotted ALOHA is one-half that of pure ALOHA.
Throughput. It can be proved that the average number of successful transmissions for
slotted ALOHA is S = G × e^(-G).
The maximum throughput Smax is 0.368, when G = 1.
In other words, if a frame is generated during one frame transmission time, then 36.8
percent of these frames reach their destination successfully. This result can be expected
because the vulnerable time is equal to the frame transmission time.
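The two throughput formulas and their maxima can be checked numerically:

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G); maximum 0.184 at G = 1/2."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G); maximum 0.368 at G = 1."""
    return G * math.exp(-G)
```

Evaluating at the stated optimal loads reproduces the 18.4% and 36.8% figures; slotted ALOHA doubles the maximum throughput because its vulnerable time is halved.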
Vulnerable Time
The vulnerable time for CSMA is the propagation time Tp. This is the time needed for
a signal to propagate from one end of the medium to the other. When a station sends a
frame, and any other station tries to send a frame during this time, a collision will
result. But if the first bit of the frame reaches the end of the medium, every station will
already have heard the bit and will refrain from sending.
1-Persistent
The 1-persistent method is simple and straightforward. In this method,
after the station finds the line idle, it sends its frame immediately (with probability 1).
This method has the highest chance of collision because two or more stations may find
the line idle and send their frames immediately.
Nonpersistent
In the nonpersistent method, a station that has a frame to send
senses the line. If the line is idle, it sends immediately. If the line is not idle, it waits a
random amount of time and then senses the line again. The nonpersistent approach
reduces the chance of collision because it is unlikely that two or more stations will wait
the same amount of time and retry to send simultaneously.
p-Persistent
The p-persistent method is used if the channel has time slots with a slot duration equal to
or greater than the maximum propagation time. The p-persistent approach combines the
advantages of the other two strategies. It reduces the chance of collision and improves
efficiency. In this method, after the station finds the line idle it follows these steps:
1. With probability p, the station sends its frame.
2. With probability q = 1 - p, the station waits for the beginning of the next time slot
and checks the line again.
a. If the line is idle, it goes to step 1.
b. If the line is busy, it acts as though a collision has occurred and uses the backoff
procedure.
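The two steps above can be sketched as one decision loop. The `line_is_idle` callback is a hypothetical stand-in for sensing the channel at the start of each time slot, and `rng` is injectable so the behavior can be tested deterministically:

```python
import random

def p_persistent(p, line_is_idle, rng=random.random):
    """Decision loop of the p-persistent method, entered after the
    station has already found the line idle."""
    while True:
        if rng() < p:
            return "send"        # step 1: send with probability p
        # step 2: with probability q = 1 - p, wait for the next slot
        if not line_is_idle():
            return "backoff"     # 2b: busy, act as if a collision occurred
        # 2a: line still idle, go back to step 1
```

Each pass either transmits, defers to the next slot, or falls into the backoff procedure, which is exactly the three outcomes listed in the steps.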
Reservation
In the reservation access method, if there are N stations in the system, there are exactly N
reservation minislots in the
reservation frame. Each minislot belongs to a station. When a station needs to send a
data frame, it makes a reservation in its own mini slot. The stations that have made
reservations can send their data frames after the reservation frame. Consider a situation
with five stations and a five-minislot reservation frame: in the first interval, only stations 1, 3,
and 4 have made reservations; in the second interval, only station 1 has made a reservation.
Binary Countdown
This protocol overcomes the overhead of 1 bit per station of the bit-map protocol. Here,
binary addresses of equal lengths are assigned to each station. For example, if there are 6
stations, they may be assigned the binary addresses 001, 010, 011, 100, 101 and 110. All
stations wanting to communicate broadcast their addresses. The station with the highest
address gets priority for transmitting.
Preamble
The first field of the 802.3 frame contains 7 bytes (56 bits) of alternating
0s and 1s that alert the receiving system to the coming frame and enable it to
synchronize its input timing. The pattern provides only an alert and a timing pulse.
The 56-bit pattern allows the stations to miss some bits at the beginning of the
frame. The preamble is actually added at the physical layer and is not (formally)
part of the frame.
Length or type.
This field is defined as a type field or length field. The original Ethernet used this field as
the type field to define the upper-layer protocol using the MAC frame. The IEEE standard
used it as the length field to define the number of bytes in the data field.
Data.
This field carries data encapsulated from the upper-layer protocols. It is a minimum of 46
and a maximum of 1500 bytes, as we will see later.
CRC. The last field contains error detection information.
Addressing
Each station on an Ethernet network (such as a PC, workstation, or printer) has its own
network interface card (NIC). The NIC fits inside the station and provides the station
with a 6-byte physical address. As shown in Figure 13.6, the Ethernet address is 6 bytes
(48 bits), normally written in hexadecimal notation, with a colon between the bytes.
The least significant bit of the first byte defines the type of address.
If the bit is 0, the address is unicast; otherwise, it is multicast.
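Checking that bit from the colon-separated hexadecimal form is a one-liner; the example addresses below are illustrative, not taken from the notes:

```python
def is_multicast(mac: str) -> bool:
    """An Ethernet address is multicast iff the least significant bit
    of its first byte is 1; otherwise it is unicast."""
    first_byte = int(mac.split(":")[0], 16)
    return bool(first_byte & 1)
```

For instance, 4A is 01001010 in binary (least significant bit 0, so unicast), while a first byte of 01 has its least significant bit set, marking a multicast address.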
Fast Ethernet
The Fast Ethernet standard (IEEE 802.3u) has been established for Ethernet networks that
need higher transmission speeds. This standard raises the Ethernet speed limit from 10
Mbps to 100 Mbps with only minimal changes to the existing cable structure. Fast Ethernet
provides faster throughput for video, multimedia, graphics, and Internet surfing, and
stronger error detection. There are three types of Fast Ethernet: 100BASE-TX for use
with Category 5 UTP cable; 100BASE-FX for use with fiber-optic cable; and 100BASE-T4,
which utilizes an extra two wires for use with Category 3 UTP cable.
Gigabit Ethernet
Gigabit Ethernet was developed to meet the need for faster communication networks with
applications such as multimedia and Voice over IP (VoIP). Also known as “gigabit-
Ethernet-over-copper” or 1000Base-T, GigE is a version of Ethernet that runs at speeds 10
times faster than 100Base-T. It is defined in the IEEE 802.3 standard and is currently used
as an enterprise backbone. Existing Ethernet LANs with 10 and 100 Mbps cards can feed
into a Gigabit Ethernet backbone to interconnect high performance switches, routers and
servers. The most important differences between Gigabit Ethernet and Fast Ethernet include
the additional support of full duplex operation in the MAC layer and the data rates.
One of the most common protocols for point-to-point access is the Point-to-Point Protocol
(PPP). Today, millions of Internet users who need to connect their home computers to the
server of an Internet service provider use PPP. To control and manage the transfer of data,
there is a need for a point-to-point protocol at the data-link layer. PPP is by far the most
common. PPP is a byte-oriented protocol.