Transport Layer
Contents
Introduction to Transport Layer
Process-to-Process Delivery
Protocols in Transport Layer
TCP - Transmission Control Protocol
UDP - User Datagram Protocol
Congestion Control
Introduction to Transport Layer
It is located between the Application layer and the Network
layer.
It provides a process-to-process communication between two
application layers, one at the local host and the other at the
remote host.
Communication is provided using a logical connection, which
means that the two application layers, which can be located in
different parts of the globe, assume that there is an imaginary
direct connection through which they can send and receive
messages.
All modules and procedures pertaining to transportation of
data or data stream are categorized into this layer.
It offers peer-to-peer and end-to-end connection between
two processes on remote hosts.
It takes data from the upper layer (i.e. the application layer), breaks it into smaller segments, numbers each byte, and hands the segments over to the lower layer (the network layer) for delivery.
Example
Transport Layer Functionalities
It facilitates the communicating hosts to carry on a
conversation.
It provides an interface for the users to the underlying
network.
It can provide for a reliable connection. It can also carry out
error checking, flow control, and verification.
Protocols used in Transport Layer
Transmission Control Protocol (TCP): It is a reliable
connection-oriented protocol that transmits data from the
source to the destination machine without any error.
User Datagram Protocol (UDP): It is a message-oriented
protocol that provides a simple unreliable, connectionless,
unacknowledged service.
Stream Control Transmission Protocol (SCTP): It combines the features of both TCP and UDP. It is message-oriented like UDP and connection-oriented and reliable like TCP.
Process-to-Process Delivery
The Data Link Layer is responsible for delivery of
frames between two neighboring nodes over a link.
This is called Node-to-Node delivery.
The Network layer is responsible for delivery of
datagrams between two hosts. This is called Host-to-
Host delivery.
Communication on the Internet is not defined as the
exchange of data between two nodes or between two
hosts.
Real communication takes place between two processes, so we need process-to-process delivery.
However, at any moment, several processes may be
running on the source host and several on the
destination host.
To complete the delivery, we need a mechanism to deliver data from one of these processes running on the source host to the corresponding process running on the destination host.
Process-to-Process Delivery
The transport layer is responsible for process-to-process delivery: the delivery of a packet, part of a message, from one process to another.
Client/Server Paradigm
There are several ways to achieve process-to-process
communication; the most common one is through the
client/server paradigm.
A process on the local host, called a Client, needs services
from a process usually on the remote host, called a
Server.
Both processes (Client/Server) have the same name. For
example, to get the day and time from a remote machine,
we need a Daytime client process running on the local host
and a Daytime server process running on a remote machine.
A remote computer can run several server programs at the
same time, just as local computers can run one or more
client programs at the same time.
For Communication, need to define the following:
Local Host & Local Process
Remote Host & Remote Process
Addressing
Whenever we need to deliver something to one specific
destination among many, we need an address
At the data link layer, we need a MAC address to choose one node among several nodes if the connection is not point-to-point. A frame in the data link layer needs a destination MAC address for delivery and a source address for the next node's reply.
At the network layer, we need an IP address to choose one host among millions. A datagram in the network layer needs a destination IP address for delivery and a source IP address for the destination's reply.
At the transport layer, we need a transport-layer address, called a port number, to choose among multiple processes running on the destination host. The destination port number is needed for delivery; the source port number is needed for the reply.
Addressing
IP Address and Port Numbers
In Internet, port numbers are 16-bit integers between 0 and
65,535.
The client program defines itself with a port number,
chosen randomly by the transport layer software running
on the client host. This is the ephemeral port number.
The server process also defines its port number. However, it cannot be chosen randomly.
If the computer at the server site runs a server process and
assigns a random number as the port number, the process
at the client site that wants to access that server and use
its services will not know the port number.
Every client process knows the well-known port
number of the corresponding server process.
Example: The Daytime client process can use an ephemeral (temporary) port number 52,000 to identify itself; the Daytime server process must use the well-known (permanent) port number 13.
Both the IP address and the port number play different roles in selecting the final destination of data: the IP address selects the destination host, while the port number selects one of the processes on that host.
IP Address and Port Numbers
IANA Ranges
The IANA (Internet Assigned Numbers Authority) has divided the port numbers into three ranges:
Well-known ports: The ports ranging from 0 to 1023 are assigned and controlled by IANA.
Registered ports: The ports ranging from 1024 to 49,151 are not assigned or controlled by IANA. They can only be registered with IANA to prevent duplication.
Dynamic ports: The ports ranging from 49,152 to 65,535
are neither controlled nor registered. They can be used
by any process. These are the ephemeral ports.
Socket Address
Process-to-Process delivery needs two identifiers, IP address
and the Port number, at each end to make a connection.
The combination of an IP address and a port number is
called a socket address.
The client socket address defines the client process uniquely
just as the server socket address defines the server process
uniquely.
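To make the socket-address idea concrete, the sketch below (in Python) builds the two (IP address, port number) pairs for the Daytime example mentioned earlier. The IP addresses and the ephemeral port 52000 are assumptions chosen for illustration; only the well-known Daytime port 13 comes from the text.

```python
# A minimal sketch of socket addresses for the Daytime example.
# The IP addresses and the ephemeral client port are assumed for illustration;
# only the well-known Daytime server port (13) is fixed by the text.

client_socket_address = ("192.168.1.10", 52000)   # client IP + ephemeral port
server_socket_address = ("203.0.113.5", 13)       # server IP + well-known port

# Together, the two socket addresses uniquely identify one client process
# talking to one server process anywhere on the Internet.
connection = (client_socket_address, server_socket_address)
print(connection)
```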
Multiplexing and Demultiplexing
The addressing mechanism allows multiplexing and
demultiplexing by the transport layer.
Multiplexing:
At the sender site, there may be several processes that need
to send packets.
However, there is only one transport layer protocol at any time.
This is a many-to-one relationship and requires multiplexing.
The protocol accepts messages from different
processes, differentiated by their assigned port
numbers.
After adding the header, the transport layer passes the packet
to the network layer.
DeMultiplexing:
At the receiver site, the relationship is one-to-many and
requires demultiplexing.
The transport layer receives datagrams from the network layer.
After error checking and dropping of the header, the transport
layer delivers each message to the appropriate process based
on the port number.
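As a rough sketch of demultiplexing, the Python fragment below delivers each arriving message to the process registered for its destination port number. The port-to-handler table and the incoming message tuples are invented for the example.

```python
# Hypothetical demultiplexing sketch: deliver each arriving message to the
# process registered for its destination port number.

# Assumed registry of local processes, keyed by port number.
processes = {
    13: lambda data: print("Daytime server got:", data),
    80: lambda data: print("Web server got:", data),
}

# Assumed incoming segments: (destination_port, payload) pairs after the
# transport layer has checked and removed the header.
incoming = [(80, b"GET /"), (13, b"time?"), (9999, b"noise")]

for dest_port, payload in incoming:
    handler = processes.get(dest_port)
    if handler is None:
        print(f"No process bound to port {dest_port}; segment dropped")
    else:
        handler(payload)   # demultiplexing: the port number selects the process
```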
Multiplexing and Demultiplexing
Error Control in Transport Layer
Encapsulation and Decapsulation
Connectionless Versus Connection-Oriented Service
A transport layer protocol can either be
connectionless or connection-oriented.
Connectionless Service: In a connectionless service, the
packets are sent from one party to another with no need for
connection establishment or connection release. The
packets are not numbered; they may be delayed or lost or
may arrive out of sequence. There is no acknowledgment
either. UDP is connectionless.
Connection-Oriented Service: In a connection-oriented
service, a connection is first established between the
sender and the receiver. Data are transferred. At the end,
the connection is released. TCP and SCTP are connection-oriented protocols.
Protocols in Transport Layer
TCP - Transmission Control Protocol
Transmission Control Protocol
It is one of the most important protocols of the Internet protocol suite. It is the most widely used protocol for data transmission in communication networks such as the Internet.
Features :
TCP is a reliable protocol (positive/negative acknowledgement).
TCP ensures that the data reaches the intended destination in the same order it was sent.
TCP is connection-oriented and provides error checking, a recovery mechanism, and end-to-end communication.
TCP provides flow control and quality of service.
TCP operates in client/server point-to-point mode.
TCP provides a full-duplex service, i.e. it can perform the roles of both receiver and sender.
TCP Header
TCP Header....
The TCP header is a minimum of 20 bytes and a maximum of 60 bytes long.
Source Port (16 bits) - It identifies the source port of the application process on the sending device.
Destination Port (16 bits) - It identifies the destination port of the application process on the receiving device.
Sequence Number (32 bits) - Sequence number of the data bytes of a segment in a session.
Acknowledgement Number (32 bits) - When the ACK flag is set, this number contains the next sequence number of the data byte expected and works as acknowledgement of the previous data received.
Header Length - This 4-bit field indicates the number of 4-byte words in the TCP header. The length of the header can be between 20 and 60 bytes. Therefore, the value of this field is always between 5 (5 × 4 = 20) and 15 (15 × 4 = 60).
Reserved (3 bits) - Reserved for future use; all bits are set to zero.
TCP Header...
Flags(1-bit each) :
URG- It indicates that Urgent Pointer field has significant data
and should be processed.
ACK- It indicates that Acknowledgement field has
significance. If ACK is cleared to 0, it indicates that packet
does not contain any acknowledgement.
PSH- When set, it is a request to the receiving station to
PUSH data (as soon as it comes) to the receiving
application without buffering it.
RST- Reset flag has the following features:
• It is used to refuse an incoming
connection.
• It is used to reject a segment.
• It is used to restart a connection.
SYN- This flag is used to set up a
connection between hosts.
FIN- This flag is used to release a connection, and no more data is exchanged thereafter. Because packets with SYN and FIN flags have sequence numbers, they are processed in the correct order.
TCP Header...
Window Size- This field is used for flow control between two stations and indicates the amount of buffer (in bytes) the receiver has allocated for that segment.
Checksum- This field contains the checksum of Header, Data
and Pseudo Headers.
Urgent Pointer- It points to the urgent data byte if URG flag is
set to 1
Options- It facilitates additional options which are not
covered by the regular header. Option field is always
described in 32-bit words. If this field contains data less than
32-bit, padding is used to cover the remaining bits to reach
32-bit boundary.
Addressing: TCP communication between two remote hosts is
done by means of port numbers (TSAPs).
Port numbers can range from 0 to 65,535 and are divided as:
System Ports (0 – 1023)
User Ports ( 1024 – 49151)
TCP Header...
The following is part of a TCP header dump
(contents) in hexadecimal format.
a. What is the source port number?
b. What is the destination port number?
c. What is the sequence number?
d. What is the acknowledgment number?
e. What is the length of the header?
f. What is the type of the segment?
g. What is the window size?
a. The source port number is 0x0532 (1330 in decimal).
b. The destination port number is 0x0017 (23 in decimal).
c. The sequence number is 0x00000001 (1 in decimal).
d. The acknowledgment number is 0x00000000 (0 in
decimal).
e. The header length is 0x5 (5 in decimal). There are 5 × 4
or 20 bytes of header.
f. The control field is 0x002. This indicates a SYN segment
used for connection
establishment.
g. The window size field is 0x07FF (2047 in decimal).
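The slide's hexadecimal dump itself is not reproduced in the text above, but a 20-byte header consistent with the listed answers can be reconstructed and decoded with Python's struct module, as sketched below; the checksum and urgent-pointer bytes are assumed to be zero, since the original dump is not shown.

```python
import struct

# Header bytes reconstructed from the answers above (checksum and urgent
# pointer are assumed to be zero, since the original dump is not shown).
header = bytes.fromhex("05320017 00000001 00000000 500207FF 00000000")

src, dst, seq, ack, off_flags, window = struct.unpack("!HHIIHH", header[:16])

hlen_words = off_flags >> 12      # upper 4 bits: header length in 4-byte words
flags = off_flags & 0x3F          # lower 6 bits: URG/ACK/PSH/RST/SYN/FIN
syn = bool(flags & 0x02)

print(f"source port        = {src}")                   # 1330
print(f"destination port   = {dst}")                   # 23
print(f"sequence number    = {seq}")                   # 1
print(f"acknowledgment no. = {ack}")                   # 0
print(f"header length      = {hlen_words * 4} bytes")  # 20
print(f"SYN segment        = {syn}")                   # True
print(f"window size        = {window}")                # 2047
```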
TCP Connection Establishment
Three-way handshaking is used for connection management.
A SYN segment cannot carry data, but it consumes one sequence number.
A SYN + ACK segment cannot carry data, but it does consume one sequence number.
An ACK segment, if carrying no data, consumes no sequence number.
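In practice the three-way handshake is carried out by the operating system's TCP implementation; an application merely calls connect() and accept(). A minimal loopback sketch in Python is shown below; the port number and the message are arbitrary choices for the example.

```python
import socket
import threading

PORT = 5000  # arbitrary port chosen for this sketch

# Passive open: the server creates a listening socket.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", PORT))
srv.listen(1)

def serve_one():
    conn, addr = srv.accept()            # handshake completes here
    print("server: connection from", addr)
    conn.sendall(b"hello after the handshake")
    conn.close()                         # starts connection termination

threading.Thread(target=serve_one, daemon=True).start()

# Active open: connect() triggers SYN, SYN+ACK, ACK under the hood.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", PORT))
print("client:", cli.recv(1024))         # data transfer after the handshake
cli.close()
srv.close()
```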
Data Transfer and Connection Termination
Figures: data transfer; connection termination.
The FIN segment consumes one sequence number if it does not carry data.
Connection Termination
• Most implementations today allow two options for connection termination:
• three-way handshaking
• four-way handshaking with a half-close option.
• Three-Way Handshaking
• The FIN segment consumes one sequence number if it does not
carry data
• Half-Close
• In TCP, one end can stop sending data while still receiving data.
This is called a half-close. Either the server or the client can issue a
half-close request.
• The data transfer from the client to the server stops. The client
half-closes the connection by sending a FIN segment.
• The server accepts the half-close by sending the ACK segment. The
server, however, can still send data.
• When the server has sent all of the processed data, it sends a FIN
segment, which is acknowledged by an ACK from the client
Connection Termination – 3-Way Handshaking
Connection Termination – 4-Way Handshaking (Half-Close)
TCP Well Defined Ports
Flow Control
- As discussed before, flow control balances the rate at which a producer creates data with the rate at which a consumer can use the data.
- We assume that the logical channel between the sending and receiving TCP is error-free.
Data flow and flow-control feedback in TCP
An example of flow control
Stream Delivery in TCP
Sending and Receiving Buffers
TCP Segments
Error Control & Flow Control
TCP uses port numbers to know which application process it needs to hand the data segment over to.
Along with that, it uses sequence numbers to synchronize
itself with the remote host.
All data segments are sent and received with sequence
numbers.
The sender knows which data segment was last received by the receiver when it gets an ACK.
The Receiver knows about the last segment sent by the
Sender by referring to the sequence number of recently
received packet.
If the sequence number of a recently received segment does not match the sequence number the receiver was expecting, the segment is discarded and a NACK is sent back.
If two segments arrive with the same sequence number, the TCP timestamp value is compared to make a decision.
Multiplexing in TCP
The technique to combine two or more data streams in one
session is called Multiplexing.
When a TCP client initializes a connection with a server, it always refers to a well-defined port number, which indicates the application process. The client itself uses a randomly generated port number from the private port number pool.
Using TCP multiplexing, a client can communicate with a number of different application processes in a single session.
For example, when a client requests a web page that in turn contains different types of data (HTTP, SMTP, FTP, etc.), the TCP session timeout is increased and the session is kept open for a longer time so that the three-way handshake overhead can be avoided.
This enables the client system to receive multiple connections over a single virtual connection.
Congestion Control
When a large amount of data is fed to a system that is not capable of handling it, congestion occurs.
TCP controls congestion by means of a window mechanism. TCP sets a window size telling the other end how much data to send.
TCP may use three algorithms for congestion control:
Additive Increase, Multiplicative Decrease (AIMD)
Slow Start
Timeout Reaction
Timer Management
TCP uses different types of timers to control and manage various tasks:
Keep-alive timer:
This timer is used to check the integrity and validity of a
connection. When keep-alive time expires, the host sends
a probe to check if the connection still exists.
Timed-Wait:
After releasing a connection, either of the hosts
waits for a Timed-Wait time to terminate the
connection completely.
This is in order to make sure that the other end has
received the acknowledgement of its connection
termination request.
The timed-wait period can be a maximum of 240 seconds (4 minutes).
Timer Management
TCP uses different types of timers to control and manage various tasks:
Retransmission timer:
This timer maintains stateful session of data sent.
If the acknowledgement of sent data is not received within the retransmission time, the data segment is sent again.
Persist timer:
TCP session can be paused by either host by sending Window
Size 0. To resume the session a host needs to send Window
Size with some larger value.
If this segment never reaches the other end, both ends may
wait for each other for infinite time.
When the Persist timer expires, the host re-sends its window
size to let the other end know.
Persist Timer helps avoid deadlocks in communication.
Data Communictions and Networking46
Example
What is the value of the receiver window (rwnd) for host
A if the receiver, host B, has a buffer size of 5000 bytes
and 1000 bytes of received and unprocessed data?
Solution
The value of rwnd = 5000 − 1000 = 4000. Host B can
receive only 4000 bytes of data before overflowing its
buffer. Host B advertises this value in its next segment to
A.
Example
What is the size of the window for host A if the value of
rwnd is 3000 bytes and the value of cwnd is 3500 bytes?
Solution
The size of the window is the smaller of rwnd and cwnd,
which is 3000 bytes.
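The two examples follow directly from the definitions rwnd = buffer size − unprocessed data and sender window = min(rwnd, cwnd); the small sketch below just reproduces that arithmetic.

```python
def receiver_window(buffer_size, unprocessed):
    """rwnd advertised by the receiver: free space left in its buffer."""
    return buffer_size - unprocessed

def sender_window(rwnd, cwnd):
    """The sender may transmit at most the smaller of rwnd and cwnd."""
    return min(rwnd, cwnd)

print(receiver_window(5000, 1000))   # 4000 bytes (first example)
print(sender_window(3000, 3500))     # 3000 bytes (second example)
```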
Crash Recovery
TCP is a very reliable protocol. It assigns a sequence number to each byte sent in a segment.
It provides a feedback mechanism, i.e. when a host receives a packet, it is bound to ACK that packet with the next expected sequence number (if it is not the last segment).
When a TCP server crashes mid-way through communication and restarts its process, it sends a TPDU (Transaction Protocol Data Unit) broadcast to all its hosts.
The hosts can then resend the last data segment that was never acknowledged and carry on.
UDP - User Datagram Protocol
User Datagram Protocol
The User Datagram Protocol (UDP) is the simplest transport-layer communication protocol in the TCP/IP protocol suite.
It involves a minimal amount of communication mechanism.
UDP is said to be an unreliable transport protocol, but it uses IP services, which provide a best-effort delivery mechanism.
In UDP, the receiver does not generate an
acknowledgement of packet received and in turn, the
sender does not wait for any acknowledgement of
packet sent.
This shortcoming makes this protocol unreliable as well as
easier on processing.
Requirement of UDP
A question may arise, why do we need an unreliable
protocol to transport the data?
We deploy UDP where the acknowledgement packets would share a significant amount of bandwidth with the actual data.
For example, in the case of video streaming, thousands of packets are forwarded towards the users. Acknowledging all the packets is troublesome and may waste a significant amount of bandwidth.
The best-effort delivery mechanism of the underlying IP protocol tries its best to deliver the packets, but even if some packets in video streaming get lost, the impact is not calamitous and can easily be ignored.
The loss of a few packets in video and voice traffic often goes unnoticed.
Features of UDP
UDP is used when acknowledgement of data does not hold any significance.
UDP is a good protocol for data flowing in one direction.
UDP is simple and suitable for query-based communications.
UDP is not connection oriented (connectionless).
UDP does not provide a congestion control mechanism.
UDP does not guarantee ordered delivery of data.
UDP is stateless.
UDP is a suitable protocol for streaming applications such as video and voice streaming.
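A minimal sketch of UDP's connectionless, unacknowledged style of exchange using Python's standard socket module; the loopback address and port 9999 are arbitrary choices for the example.

```python
import socket

ADDR = ("127.0.0.1", 9999)  # arbitrary loopback address and port for the sketch

# Receiver: bind to a port and wait for one datagram (no connection setup).
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(ADDR)
rx.settimeout(2.0)

# Sender: no connect(), no handshake, no acknowledgement expected.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"query: what time is it?", ADDR)

data, sender = rx.recvfrom(1024)     # each datagram is independent
print("received", data, "from", sender)

tx.close()
rx.close()
```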
User Datagram format
Source Port - a 16-bit field used to identify the source port.
Destination Port - a 16-bit field used to identify the application-level service on the destination machine.
Length - specifies the length of the UDP packet (including the header). It is a 16-bit field and its minimum value is 8 bytes, i.e. the size of the UDP header itself.
Checksum - stores the checksum value generated by the sender before sending. This field is optional in IPv4, so when the checksum field does not carry any value, all its bits are set to zero.
UDP Applications
UDP is suitable for a process that requires simple request-
response communication with little concern for flow and
error control. It is not usually used for a process such as
FTP that needs to send bulk data.
UDP is suitable for a process with internal flow and error-
control mechanisms. For example, the Trivial File Transfer
Protocol (TFTP) process includes flow and error control. It
can easily use UDP.
UDP is a suitable transport protocol for multicasting.
Multicasting capability is embedded in the UDP software
but not in the TCP software.
UDP is used for management processes such as SNMP.
UDP is used for some route updating protocols such as
Routing Information Protocol (RIP).
UDP is normally used for interactive real-time applications that cannot tolerate uneven delay between sections of a received message.
UDP Services
Connectionless Service:
UDP provides a connectionless service. This means that
each user datagram sent by UDP is an independent
datagram.
There is no relationship between the different user datagrams
even if they are coming from the same source process and
going to the same destination program.
The user datagrams are not numbered.
Flow Control:
UDP is a very simple protocol. There is no flow control, and
hence no window mechanism.
The receiver may overflow with incoming messages. The lack
of flow control means that the process using UDP should
provide for this service, if needed.
UDP Services...
Error Control
There is no error control mechanism in UDP except for the
checksum. This means that the sender does not know if
a message has been lost or duplicated.
When the receiver detects an error through the
checksum, the user datagram is silently discarded.
The lack of error control means that the process using
UDP should provide for this service, if needed.
Encapsulation and Decapsulation
To send a message from one process to another, the UDP
protocol encapsulates and decapsulates messages.
Checksum:
UDP checksum calculation includes three sections: a
pseudoheader, the UDP header, and the data coming
from the application layer.
Pseudo Header for Checksum Calculation
Pseudo Header for Checksum Calculation
• The pseudo header is not actually present in the UDP
packet itself but is constructed by including certain
fields from the IP header.
• The purpose of the pseudo header is to provide
additional information for error detection in the UDP
packet.
• It includes the source IP address, destination IP address,
protocol field (set to UDP), UDP length, and a
placeholder for the UDP checksum field.
• By incorporating these fields from the IP header, the
pseudo header ensures that changes in the IP header
information can be detected when calculating the UDP
checksum.
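A rough Python sketch of the calculation described above: the Internet checksum (one's-complement sum of 16-bit words) is taken over the pseudo header, the UDP header with a zero checksum field, and the data. The addresses, ports, and payload in the final line are made-up values used only to exercise the function.

```python
import struct
import socket

def ones_complement_sum16(data: bytes) -> int:
    """Internet checksum core: one's-complement sum of 16-bit words."""
    if len(data) % 2:                       # pad to an even number of bytes
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return total

def udp_checksum(src_ip, dst_ip, src_port, dst_port, payload: bytes) -> int:
    udp_length = 8 + len(payload)
    # Pseudo header: source IP, destination IP, zero byte, protocol (17), UDP length.
    pseudo = socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) + \
             struct.pack("!BBH", 0, 17, udp_length)
    # UDP header with the checksum field set to zero for the calculation.
    header = struct.pack("!HHHH", src_port, dst_port, udp_length, 0)
    total = ones_complement_sum16(pseudo + header + payload)
    return (~total) & 0xFFFF                # one's complement of the sum

# Example values are assumptions chosen just to exercise the function.
print(hex(udp_checksum("10.0.0.1", "10.0.0.2", 1087, 13, b"TESTING")))
```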
Checksum Calculation for Simple UDP
TCP versus UDP
TCP: Connection-oriented protocol. | UDP: Connectionless protocol.
TCP: Reads data as a stream of bytes, and the message is transmitted to segment boundaries. | UDP: Messages are packets sent one by one, with integrity checked at arrival time.
TCP: Messages make their way across the Internet from one computer to another. | UDP: Not connection-based, so one program can send lots of packets to another.
TCP: Rearranges data packets in the specific order. | UDP: Has no fixed order because all packets are independent of each other.
TCP: The speed is slower. | UDP: Faster, because error recovery is not attempted.
TCP versus UDP
TCP: Heavy-weight. | UDP: Lightweight.
TCP: Needs three packets to set up a socket connection before any user data can be sent. | UDP: No tracking of connections, ordering of messages, etc.
TCP: Does error checking and also makes error recovery. | UDP: Performs error checking, but discards erroneous packets.
TCP: Uses acknowledgment segments. | UDP: No acknowledgment segments.
TCP: Uses a handshake protocol (SYN, SYN-ACK, ACK). | UDP: No handshake (connectionless protocol).
TCP: Reliable, as it guarantees delivery of data to the destination router. | UDP: The delivery of data to the destination cannot be guaranteed.
TCP: Offers extensive error-checking mechanisms. | UDP: Has just a single error-checking mechanism, the checksum.
Congestion Control and Quality of Service
Congestion Control and QoS
Congestion control and Quality of Service(QoS) are two
issues so closely bound together that improving one means
improving the other and ignoring one usually means
ignoring the other.
The techniques to prevent or eliminate congestion also
improve the QoS in a network.
Data Traffic: the amount of data moving across a network at a given point in time.
In congestion control we try to avoid traffic congestion; in QoS we try to create an appropriate environment for the traffic.
Traffic Descriptors -I
Traffic descriptors are qualitative values that represent
a data flow
Average Rate: is the number of bits sent during a period
of time, divided by the number of seconds in that period.
It indicates the average bandwidth needed by the traffic.
Average data rate = Amount of data / Time
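For instance, with assumed numbers: if 10,000 bits are sent over a 5-second interval, the average data rate is 10,000 bits / 5 s = 2,000 bps.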
Traffic Descriptors -II
Peak Rate: defines the maximum data rate of the traffic.
Maximum Burst Size: refers to the maximum length of time the traffic is generated at the peak rate.
Effective Bandwidth: is the bandwidth that the network
needs to allocate for the flow of traffic.
It is a function of three values:
Average data rate
Peak data rate, and
Maximum burst size.
Traffic Profiles
Constant bit rate (CBR), or fixed rate: the data rate does not change.
Variable bit rate (VBR): the rate of the data flow changes over time, with the changes smooth instead of sudden and sharp.
Bursty data rate: the data rate changes suddenly in a very short time.
Congestion
It is an important issue in a packet-switched network.
Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets the network can handle).
Congestion Control...
It refers to the mechanisms and techniques to control the
congestion and keep the load below the capacity.
Queues in Routers/Switches
Congestion in a network occurs because routers and switches have queues: buffers that hold the packets before and after processing.
Network Performance
Congestion control involves two factors that measure network performance:
Delay: refers to the amount of time it takes for a packet to go from point A to point B.
Throughput: refers to how much data (number of packets) can be transferred from source to destination within a given timeframe.
Congestion Control
Congestion Control...
It refers to the mechanisms and techniques to control the
congestion and keep the load below the capacity.
In general, we can divide congestion control mechanisms
into two broad categories: open-loop congestion control
(prevention) and closed-loop congestion control (removal).
Open-loop Congestion Control
Re-transmission Policy:
If the sender feels that a sent packet is lost or corrupted, the
packet needs to be retransmitted.
Re-transmission may increase congestion in the network.
However, a good re-transmission policy can prevent
congestion.
Window Policy:
The type of window at the sender may also affect congestion.
The Selective Repeat window is better than the Go-Back-N
window
for congestion control.
In the Go-Back-N window, when the timer for a packet times
out, several packets may be resent, although some may have
arrived safe and sound at the receiver. This duplication may make the congestion worse.
The Selective Repeat window, on the other hand, tries to
send the specific packets that have been lost or corrupted.
Open-loop Congestion Control
Acknowledgment Policy:
Imposed by the receiver, it may also affect congestion.
If the receiver does not acknowledge every packet it receives, it
may slow down the sender and help prevent congestion.
A receiver may decide to acknowledge only N packets at a time.
We need to know that the acknowledgments are also part of
the load in a network. Sending fewer acknowledgments means
imposing less load on the network.
Discarding Policy:
A good discarding policy by the routers may prevent congestion
and at the same time may not harm the integrity of the
transmission.
For example, in audio transmission, if the policy is to discard
less sensitive packets when congestion is likely to happen, the
quality of sound is still preserved and congestion is prevented
or alleviated.
Open-loop Congestion Control
Admission Policy:
It is a QoS mechanism that can also prevent congestion in virtual-circuit networks.
Switches in a flow first check the resource requirement
of a flow before admitting it to the network.
A router can deny establishing a virtual-circuit
connection if there is congestion in the network or if
there is a possibility of
future congestion.
Closed-loop Congestion Control
Backpressure:
It refers to a congestion control mechanism in which a
congested node stops receiving data from the immediate
node or nodes.
This may cause the upstream node or nodes to become congested, and they, in turn, reject data from their upstream node or nodes.
Closed-loop Congestion Control
Choke Packets:
It is a packet sent by a node to the source to inform it of
congestion. When a router in the Internet is overwhelmed
with IP datagrams, it may discard some of them; but it
informs the source host, using a source quench
ICMP(Internet Control Message Protocol) message.
The warning message goes directly to the source station; the intermediate routers through which the packet has traveled do not take any action.
Difference b/w Choke Packets and Backpressure
Choke
Packets:
The warning is from the router, which has encountered
congestion, to the source station directly. The intermediate
nodes through which the packet has traveled are not
warned.
Back Pressure:
The warning is from one node to its upstream node,
although the warning may eventually reach the source
station.
Closed-loop Congestion Control
Implicit Signaling:
There is no communication between the congested node and the source. The source guesses that there is congestion somewhere in the network from other symptoms.
For example, when a source sends several packets and receives no acknowledgment for a while, one assumption is that the network is congested.
The delay in receiving an acknowledgment is interpreted
as congestion in the network; the source should slow down.
Explicit Signaling:
The node that experiences congestion can explicitly send a
signal to the source or destination.
This method, however, is different from the choke-packet method.
In the choke packet method, a separate packet is used for this
purpose; In the explicit signaling method, the signal is included
in the packets that carry data.
It can occur in either the forward or the backward direction.
Closed-loop Congestion Control
Backward Signaling:
A bit can be set in a packet moving in the direction
opposite to the congestion.
This bit can warn the source that there is congestion
and that it needs to slow down to avoid the discarding
of packets.
Forward Signaling:
A bit can be set in a packet moving in the
direction of the congestion.
This bit can warn the destination that there is
congestion.
The receiver in this case can use policies, such as slowing
down the acknowledgments, to alleviate the congestion
Example - Congestion Control in TCP
TCP General Policy
Congestion Control is based on three
phases:
Slow start
Congestion avoidance and
Congestion detection.
Note: The size of the congestion window increases exponentially
until it reaches a threshold.
In the slow-start phase, the sender starts with a very slow rate of transmission but increases the rate rapidly to reach a threshold.
When the threshold is reached, the data rate is reduced to avoid
congestion.
Finally if congestion is detected, the sender goes back to the
slow-start or congestion avoidance phase based on how
the congestion is detected.
Slow Start - Exponential Increase
Figure: Slow-start phase
Congestion Avoidance - Additive Increase
Congestion Avoidance: Additive Increase:
In the slow-start algorithm, the size of the congestion window increases exponentially.
To avoid congestion before it happens, one must slow
down this exponential growth.
TCP defines another algorithm called congestion
avoidance, which undergoes an additive increase
instead of an exponential one.
When the size of the congestion window reaches the slow-
start threshold, the slow-start phase stops and the
additive phase begins.
In this algorithm, each time the whole window of
segments is acknowledged (one round), the size of the
congestion window is increased by 1
Congestion Avoidance - Additive Increase
Figure: Congestion avoidance
Congestion Detection - Multiplicative Decrease
Congestion Detection: Multiplicative Decrease:
If congestion occurs, the congestion window size must be
decreased.
The only way the sender can guess that congestion has
occurred is by the need to retransmit a segment. However,
retransmission can occur in one of two cases:
When a timer times out :
When three ACKs are received.
In both cases, the size of the threshold is dropped to one-
half, a multiplicative decrease.
Implementation Reaction to Congestion Detection
If detection is by time-out, a new slow-start phase starts.
If detection is by three ACKs, a new congestion avoidance phase
starts.
Congestion Detection - Multiplicative Decrease
TCP implementations have two reactions:
1. If a time-out occurs, there is a stronger possibility of congestion; a segment has probably been dropped in the network, and there is no news about the sent segments. In this case, TCP reacts strongly:
It sets the value of the threshold to one-half of the current window size.
It sets cwnd to the size of one segment.
It starts the slow-start phase again.
2. If three ACKs are received, there is a weaker possibility of congestion; a segment may have been dropped, but some segments after that may have arrived safely since three ACKs are received. This is called fast retransmission and fast recovery. In this case, TCP has a weaker reaction:
It sets the value of the threshold to one-half of the current window size.
It sets cwnd to the value of the threshold (some implementations add three segment sizes to the threshold).
It starts the congestion avoidance phase.
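A toy Python sketch of the window adjustments just described, expressed in maximum-segment-size units: exponential growth in slow start, additive growth in congestion avoidance, and the two multiplicative-decrease reactions. The initial threshold of 16 and the event trace are invented for illustration; the update rules follow the text.

```python
class TcpCongestionWindow:
    """Toy model of TCP congestion control in MSS units (not a real TCP)."""

    def __init__(self, ssthresh=16):
        self.cwnd = 1                 # slow start begins with one segment
        self.ssthresh = ssthresh

    def on_round_acked(self):
        if self.cwnd < self.ssthresh:
            self.cwnd *= 2            # slow start: exponential increase
        else:
            self.cwnd += 1            # congestion avoidance: additive increase

    def on_timeout(self):
        self.ssthresh = max(self.cwnd // 2, 2)   # multiplicative decrease
        self.cwnd = 1                            # restart slow start

    def on_three_dup_acks(self):
        self.ssthresh = max(self.cwnd // 2, 2)   # multiplicative decrease
        self.cwnd = self.ssthresh                # fast recovery: skip slow start

# Invented event trace to show the behaviour.
tcp = TcpCongestionWindow()
for event in ["ack", "ack", "ack", "ack", "3dup", "ack", "timeout", "ack", "ack"]:
    {"ack": tcp.on_round_acked,
     "3dup": tcp.on_three_dup_acks,
     "timeout": tcp.on_timeout}[event]()
    print(f"{event:7s} cwnd={tcp.cwnd:3d} ssthresh={tcp.ssthresh}")
```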
Congestion Control
Slow start, exponential increase
Congestion Control
Congestion avoidance, additive increase
Congestion Control – TCP Tahoe
Congestion Control – TCP Reno
Additive increase, multiplicative decrease (AIMD)
Quality of Service
QoS is an Internetworking Issue
It refers to a set of techniques and mechanisms that guarantee the performance of the network to deliver predictable service to an application program.
Techniques to improve QoS:
Scheduling
Traffic Shaping
Resource Reservation
Admission Control
QoS - Scheduling -I
Packets from different flows arrive at a switch/router for
processing.
A good scheduling technique treats the different flows in a
fair and appropriate manner.
Several scheduling techniques are designed to improve the QoS. We discuss three of them here: FIFO queuing, priority queuing, and weighted fair queuing.
QoS - Scheduling -FIFO
QoS - Scheduling – Priority Scheduling
QoS - Scheduling – Weighted Fair Queuing
• each class may receive a small amount of time in
each time period
• if the throughput for the router is R, the class with the
highest priority may have the throughput of R/2, the
middle class may have the throughput of R/3, and the
class with the lowest priority may have the
throughput of R/6
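A rough weighted round-robin sketch of the idea: with weights 3, 2, and 1 the three classes receive 1/2, 1/3, and 1/6 of the service, matching the R/2, R/3, and R/6 split above. The queue contents are invented for the example.

```python
from collections import deque

# Three classes with weights 3:2:1, i.e. shares of 1/2, 1/3 and 1/6 of the output.
queues = {
    "high":   (3, deque(f"H{i}" for i in range(9))),
    "middle": (2, deque(f"M{i}" for i in range(9))),
    "low":    (1, deque(f"L{i}" for i in range(9))),
}

schedule = []
while any(q for _, q in queues.values()):
    for name, (weight, q) in queues.items():
        for _ in range(weight):          # serve 'weight' packets per round
            if q:
                schedule.append(q.popleft())

print(schedule)   # per round: 3 high-class, 2 middle-class, 1 low-class packet
```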
QoS - Traffic Shaping -I
It is a mechanism to control the amount and the rate of the
traffic sent to the network.
Two techniques can shape traffic: leaky bucket and token bucket.
1. Leaky bucket:
Figure: Implementation of the leaky bucket
QoS - Traffic Shaping -II
2. Token bucket:
QoS - Traffic Shaping
Leaky Bucket
It shapes bursty traffic into fixed-rate traffic by averaging the
data rate. It may drop the packets if the bucket is full.
Token Bucket
It allows bursty traffic at a regulated maximum rate.
Conclusion
The two techniques can be combined to credit an idle host
and at the same time regulate the traffic.
The leaky bucket is applied after the token bucket;
The rate of the leaky bucket needs to be higher than
the rate of tokens dropped in the bucket.
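A toy Python sketch of a token bucket, one way to picture the mechanism described above: an idle host accumulates credit (tokens) at a fixed rate up to the bucket capacity, and a packet is sent only if enough tokens are available, which allows regulated bursts. All the numbers are arbitrary.

```python
class TokenBucket:
    """Toy token bucket: 'rate' tokens per tick, at most 'capacity' saved up."""

    def __init__(self, rate, capacity):
        self.rate = rate              # tokens added per tick (e.g. bytes per tick)
        self.capacity = capacity      # maximum tokens the bucket can hold
        self.tokens = capacity        # an idle host starts with full credit

    def tick(self):
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self, size):
        if size <= self.tokens:       # enough credit: the burst is allowed
            self.tokens -= size
            return True
        return False                  # otherwise the packet must wait

bucket = TokenBucket(rate=100, capacity=500)
for t, packet_size in enumerate([400, 300, 100, 50, 600]):
    sent = bucket.try_send(packet_size)
    print(f"tick {t}: size={packet_size:3d} sent={sent} tokens left={bucket.tokens}")
    bucket.tick()
```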
Thanks