

A comparative study in QOS protocols

Name ()

Subject

Department

Supervisor

DECLARATION

I hereby declare that this dissertation is my own original work, which I am submitting for the (ABC) at the University of ABC. This dissertation has never before been submitted for a degree. All sources of information used in the thesis have been acknowledged.

Name of student signature



Table of Contents
Introduction
Characteristics of QoS
Classification of QoS
    Based on the Quality of Service (QoS) strategy used
        Based on how the routing protocol interacts with the QoS provisioning process
        Based on how the Routing Protocol and the MAC Protocol communicate
        Based on the routing information update process in use
Evaluation metrics for QoS protocols
Criteria of QoS Routing protocol classification
Single constrained vs. multi constrained metrics
Hard QoS vs. Soft QoS
QOS protocols
    First-in, First-out (FIFO) Queuing
    Priority Queuing (PQ)
    Custom queuing (CQ)
    Weighted fair queuing (WFQ)
    Deficit Weighted Round Robin (DWRR)
    DWRR implementation and application
    Modified Deficit Round Robin Queuing Discipline (MDRR)
    Modified Weighted Round Robin (MWRR)
QOS-AWARE Routing Protocols
    CEDAR
    Multipath Routing Protocol
    Genetic Algorithm based QoS Routing protocol for MANETs (GAMAN)
    Predictive Location based QoS Routing in Mobile Ad Hoc Networks (PLBQR)
    QoS Multicast Routing protocol with Dynamic group topology (QMRPD)
    QoS Optimized Link State Routing (QOLSR)
Issues and challenges while providing QoS in a network
    Unreliable channel
    Maintenance of route
    Mobility of node
    Limited power supply
    Lack of centralized control
    Channel contention
    Security
Conclusion and Future work
References

Abstract
This article provides a comprehensive overview of QoS routing metrics, tools, and factors affecting QoS routing protocol performance. Current QoS routing protocols are examined and compared in terms of their relative strengths, weaknesses, and applicability, and are categorized according to the QoS metrics they use and the form of QoS guarantee they give. The main aim of QoS provisioning is to achieve more deterministic network behaviour so that network information can be delivered more reliably and network resources can be better exploited. Today, the Internet provides only best-effort service: traffic is transmitted as quickly as possible, but there is no guarantee of timeliness or packet delivery. With the rapid transformation of the Internet into a commercial system, customer expectations for service quality have risen sharply. People now rely heavily on network services such as VoIP, video conferencing, and file transfer, and those services make use of a variety of traffic-management mechanisms. One of the most critical mechanisms in a traffic-management system is queuing: each router in the network must enforce some queuing discipline that governs how packets are buffered while awaiting transmission. The main goal of this paper is to illustrate the role of various queuing disciplines in quality of service (QoS) research.

Introduction
Quality of Service (QoS) is a group of technologies that work together on a network to ensure that high-
priority applications and traffic are reliably delivered even when network capacity is reduced. This is
accomplished using QoS technologies, which provide differentiated handling and capacity allocation to individual network traffic flows. This allows the network administrator to assign the order in which
packets are processed as well as the amount of bandwidth available to that application or traffic flow.

Bandwidth (throughput), latency (delay), jitter (variance in latency), and error rate are all important QoS
metrics. As a result, QoS is especially important for high-bandwidth, real-time traffic like VoIP, video
conferencing, and video-on-demand, which are sensitive to latency and jitter. These applications are
referred to as "inelastic" because they have minimum bandwidth requirements and maximum latency
limits. Queuing and bandwidth control are the QoS systems for ordering packets and allocating
bandwidth, respectively. However, traffic must be differentiated using classification tools before they
can be enforced. Organizations can ensure continuity and sufficient resource availability for their most
critical applications by classifying traffic according to policy.

Traffic can be categorized in a number of ways, such as by port or IP address, or by programme or user. The
latter parameters allow for more accurate identification and, as a result, data classification.

Following that, rules are delegated to queuing and bandwidth management software to handle traffic
flows based on the classification they got upon joining the network. Packets in traffic flows may be
queued before the network is able to process them, thanks to the queuing system. Priority Queuing (PQ)
was created to ensure that the most critical batches of applications and traffic have the required
availability and minimal latency of network performance by assigning them a priority and specific
bandwidth based on their classification. This means that the most critical operations on a network do
not suffer from a lack of bandwidth due to lower-priority activities. Users, applications, and traffic can be
divided into up to eight distinct queues.

Bandwidth management systems monitor and regulate traffic flows on the network in order to prevent
the network from exceeding its capacity and resulting in network congestion. Traffic shaping, a rate
limiting technique used to maximize or guarantee efficiency and increase available bandwidth where
possible, and scheduling algorithms, which provide a variety of methods for supplying bandwidth to

particular traffic flows, are two mechanisms for bandwidth management. The above facilities and
controls can be handled and consolidated down to a single box, depending on the provider. QoS via Palo
Alto Networks firewalls is an example of this. Differentiated Services Code Point (DSCP) is used to
communicate QoS steps and classification outside the box and to downstream network infrastructure.
DSCP assigns a classification to each packet and communicates this to each box it passes through,
ensuring that QoS policy is applied consistently.
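As a rough illustration, DSCP marking can be applied per socket on most operating systems. The sketch below (Python; the EF code point is an assumed example) sets the IP TOS byte on a UDP socket so that outgoing datagrams carry the chosen DSCP value, which downstream routers can match when applying QoS policy:

```python
import socket

# DSCP occupies the upper six bits of the IP TOS byte, so the code point
# is shifted left by two before being handed to the socket option.
DSCP_EF = 46            # Expedited Forwarding, commonly used for voice traffic
tos = DSCP_EF << 2      # 0b101110 -> 184 (0xB8)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
# Any datagram sent on this socket now carries DSCP 46 in its IP header.
sock.close()
```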

QoS can also be viewed as the ability of a network to deliver service good enough to satisfy its users. In other words, QoS assesses user satisfaction as well as network efficiency. Although some
applications, such as FTP, HTTP, video conferencing, and e-mail, are unaffected by delays in transmitted
information, others, such as voice and video, are more vulnerable to information loss, delay, and jitter.
As a result, VoIP QoS is essential to ensure that voice packets are not lost or delayed while being
transmitted over the network. To improve VoIP QoS, different parameters such as (delay, jitter, and
packet loss) are calculated according to ITU guidelines. These parameters can be modified and
monitored within a reasonable range to improve VoIP QoS.

QoS routing protocols necessitate not only finding a route from point A to point B, but also a route that
meets the end-to-end QoS specifications, which are often expressed in terms of bandwidth (or) latency.
A network or service provider may provide users with a variety of services. A service can be defined in
this way by a collection of observable pre-specified service specifications, such as minimum bandwidth,
maximum latency, maximum delay variance (jitter), and maximum packet loss rate. After receiving a
service request from a user, the network must ensure that the user's flow's service requirements are
met, as agreed, during the flow's length (a packet from the source to the destination). The first task after
receiving a service request from a customer is to find a loop-free route from source to destination that
has the required resources to fulfil the QoS requirements of the requested service. QoS routing is the
term for this method.

In terms of jitter, reliability, delay, and bandwidth, different applications and data flows have different
requirements. The data flows and applications' Quality of Service (QoS) is determined by these
characteristics. Different streaming services, such as VoIP or video, can allow a certain amount of delay,
but not jitter, since even a small amount of jitter would affect the picture noticeably. However, since IP
networks are Best-Effort networks, none of these characteristics can be assured for an application or a
data flow. As a result, the aim of QoS is to provide predictability in performance for various applications.

There are several different types of delays that can be calculated in a network. Some of the more
popular are mentioned below:

 Propagation delay – the time it takes a packet to travel through the transmission medium.
 Queuing delay – the time a packet spends waiting in a queue.
 Processing delay – the time a packet spends inside a router.
 Packetization delay – the time it takes to convert data into packets.
 Serialization delay – the time it takes to clock all the bits of a packet onto the link.
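To make the composition of these delays concrete, the sketch below adds up hypothetical values for each component over a single hop (all figures are illustrative assumptions, not measurements):

```python
# Illustrative per-hop delay components, in milliseconds (assumed values).
propagation_ms   = 5.0    # time for the signal to traverse the medium
queuing_ms       = 2.5    # time spent waiting in the router's queue
processing_ms    = 0.5    # time spent inside the router
packetization_ms = 20.0   # time to assemble the payload into a packet
serialization_ms = 0.12   # time to clock the packet's bits onto the link

one_hop_delay_ms = (propagation_ms + queuing_ms + processing_ms
                    + packetization_ms + serialization_ms)
print(f"one-hop delay: {one_hop_delay_ms:.2f} ms")
```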

By changing the propagation media and upgrading the hardware of the servers and routers, these delays can be reduced. However, since this is a matter of cost, it may not be an option, so implementing QoS may instead be used to improve the delay experienced by an application. Jitter is another word for delay variance, and it refers to the difference in delay between successive packets: if the second packet has a delay of 2ms and the third has a delay of 3ms, the jitter between them is 1ms. Jitter is calculated by taking the differences between the delays of consecutive packets. Even if some links have enough bandwidth to handle a large amount of traffic, several smaller flows meeting at key aggregation points in the network may cause congestion. Other connections can have different upload and download rates, likewise causing congestion and packet loss. Congestion control, which manages the queuing and dropping of packets, is therefore an important part of QoS. Packets are only dropped when the hardware or software buffer has been filled. We can use Integrated or Differentiated Services to identify the applications and ensure, or at least increase the probability, that none of the more significant packets are lost.
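A minimal sketch of this calculation, assuming jitter is taken as the absolute difference between the delays of consecutive packets:

```python
def jitter(delays_ms):
    """Per-gap jitter (absolute delay differences between consecutive packets)
    and the mean jitter over a list of per-packet delays, in milliseconds."""
    gaps = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return gaps, sum(gaps) / len(gaps)

# The example from the text: delays of 2 ms and 3 ms give a jitter of 1 ms.
gaps, mean_jitter = jitter([2.0, 3.0, 3.0, 5.0])
print(gaps)         # [1.0, 0.0, 2.0]
print(mean_jitter)  # 1.0
```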

In essence, Integrated Services uses the Resource Reservation Protocol (RSVP) to reserve bandwidth for the various applications. The applications inform the network of the type of QoS they need, causing the network to set aside bandwidth for them. The issue with Integrated Services is that it does not scale, and all affected systems must support RSVP.

Characteristics of QoS
The major challenge in delivering QoS is that it incurs higher computational and communication costs: it takes longer to set up a connection, and more state must be kept per connection. When providing QoS for MANETs, the increase in state information and the related complexity is counterbalanced by better use of network capacity, but numerous issues must still be addressed. The following are the main issues:

o Changing network topology dynamically


o Inaccurate state details
o Lack of centralized control
o Error-prone shared radio channel
o Hidden terminal issue
o Limited power supply
o Node mobility
o Insecure medium

Classification of QoS
1. Based on the Quality of Service (QoS) strategy used.
2. On the basis of the layer.
3. Other Quality of Service (QoS) options.

Based on the Quality of Service (QoS) strategy used.

There are three types of QoS approaches in this category: based on the interaction between the routing protocol and the QoS provisioning mechanism, based on the interaction between the network layer and the MAC layer, and based on the routing information update mechanism.

a) Based on how the routing protocol interacts with the QoS provisioning process.
Two types of QoS approaches can be distinguished here: the coupled QoS approach and the decoupled QoS approach.

Coupled QoS approach

In the coupled approach, the routing protocol works together with the QoS provisioning mechanism. Protocols in this category include:

 TBP (Ticket-Based QoS Routing Protocol)
 PLBQR (Predictive Location-Based QoS Routing Protocol)
 TDR (Trigger-Based Distributed QoS Routing Protocol)
 QoS AODV (QoS-Enabled Ad hoc On-Demand Distance Vector Routing Protocol)
 BR (Bandwidth Routing Protocol)
 OQR (On-Demand QoS Routing Protocol)
 OLMQR (On-Demand Link-State Multipath QoS Routing Protocol)
 AQR (Asynchronous Slot Allocation Strategies)
 CEDAR (Core Extraction Distributed Ad hoc Routing Protocol)

Decoupled QoS approach


The QoS provisioning mechanism in the decoupled method does not rely on any particular routing
protocol to ensure QoS guarantees.

• INSIGNIA

• SWAN (Stateless Wireless Ad hoc Networks)

• PRTMAC (Proactive Real-Time MAC)

Based on how the Routing Protocol and the MAC Protocol communicate.
As shown below, there are two types of QoS approaches: independent QoS approaches and dependent
QoS approaches.

Independent QoS approaches

In independent QoS approaches, the network layer does not rely on the MAC layer for QoS provisioning.

 TBP (Ticket-Based QoS Routing Protocol)
 PLBQR (Predictive Location-Based QoS Routing Protocol)
 QoS AODV (QoS-Enabled Ad hoc On-Demand Distance Vector Routing Protocol)
 INSIGNIA
 INORA
 SWAN (Stateless Wireless Ad hoc Networks)

Dependent QoS approaches

In dependent QoS approaches, the MAC layer must assist the routing protocol in QoS provisioning.

 TDR (Trigger-Based Distributed Routing Protocol)
 BR (Bandwidth Routing Protocol)
 OLMQR (On-Demand Link-State Multipath QoS Routing Protocol)
 AQR (Asynchronous Slot Allocation Strategies)
 CEDAR (Core Extraction Distributed Ad hoc Routing Protocol)
 PRTMAC (Proactive Real-Time MAC)

Based on the routing information update process in use.

Three types of QoS approaches are distinguished by their routing information update mechanism: table-driven, on-demand, and hybrid QoS approaches.

Table-Driven QoS Approaches

In the table-driven approach, each node in the network maintains a routing table that aids in packet forwarding.

 PLBQR (Predictive Location-Based QoS Routing Protocol)

On-Demand QoS Approaches

Since no such tables are maintained at the nodes in the on-demand approaches, the source node must discover the route on the fly.

 TBP (Ticket-Based QoS Routing Protocol)
 TDR (Trigger-Based Distributed Routing Protocol)
 QoS AODV (QoS-Enabled Ad hoc On-Demand Distance Vector Routing Protocol)
 OQR (On-Demand QoS Routing Protocol)
 OLMQR (On-Demand Link-State Multipath QoS Routing Protocol)
 AQR (Asynchronous Slot Allocation Strategies)
 PRTMAC (Proactive Real-Time MAC)
 INORA

Hybrid QoS Approaches

The hybrid approach combines elements of both table-driven and on-demand methods.

 BR (Bandwidth Routing Protocol)
 CEDAR (Core Extraction Distributed Ad hoc Routing Protocol)

Evaluation metrics for QoS protocols

Since different applications have different needs, the resources they require and the associated QoS parameters vary from one application to the next. For example, bandwidth, latency, and delay-jitter are important QoS parameters in multimedia applications, while military applications have strict security specifications. The metrics listed below are some of the most common metrics used by applications to express QoS requirements to routing protocols.

 Minimum Throughput (bps) – the minimum data throughput required by the application.
 Maximum Delay (s) – the maximum tolerable end-to-end delay for data packets.
 Maximum Delay Jitter – the maximum tolerable difference between the upper bound and the absolute minimum end-to-end delay.
 Maximum Packet Loss Ratio – the permissible percentage of total packets sent that are not received by the final destination node.

A metric's value over the entire path takes one of the following forms:

 Additive metrics: m(p) = Σ m(lki), for i = 1, …, LK,

where m(p) represents the total value of metric m over the path p, lki is the i-th link of the path p, and LK is the number of links in the path. Delay, delay variance (jitter), and cost are examples of this form of composition. Various factors that influence communication network delay are discussed in the literature.

 Concave metrics: m(p) = min(m(lki)), for i = 1, …, LK.

Bandwidth exemplifies this form of composition. The bandwidth of interest is the usable residual bandwidth for new traffic, which can be defined as the minimum residual bandwidth over all links on the path, i.e. the bottleneck bandwidth.

 Multiplicative metrics: m(p) = Π m(lki), for i = 1, …, LK.

Delivery probability (one minus the loss probability) is an example of this form of composition.



 Convex metrics: m(p) = max(m(lki)), for i = 1, …, LK.

The convex rule is used to compose metrics such as vulnerability (in the sense of security) and throughput. Whatever metrics are used to determine the route, they must represent the basic network properties of interest; residual bandwidth, delay, and jitter are examples of such metrics. Since flow QoS requirements must be mapped into path metrics, the metrics chosen determine the types of QoS guarantees the network can accept; conversely, QoS-based routing cannot support QoS requirements that cannot be expressed in terms of such path metrics.
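The four composition rules can be sketched as follows (Python; the link attributes and per-link figures are illustrative assumptions for the example):

```python
import math

# Hypothetical per-link values for a three-link path.
links = [
    {"delay_ms": 5, "bandwidth_mbps": 100, "delivery_prob": 0.999},
    {"delay_ms": 2, "bandwidth_mbps": 10,  "delivery_prob": 0.995},
    {"delay_ms": 8, "bandwidth_mbps": 50,  "delivery_prob": 0.990},
]

path_delay = sum(l["delay_ms"] for l in links)              # additive rule
path_bw    = min(l["bandwidth_mbps"] for l in links)        # concave rule (bottleneck)
path_prob  = math.prod(l["delivery_prob"] for l in links)   # multiplicative rule
worst_hop  = max(l["delay_ms"] for l in links)              # convex rule

print(path_delay, path_bw, worst_hop)   # 15 10 8
```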

Criteria of QoS Routing protocol classification

Based on the routing information update process used, QoS route-discovery protocols can be divided into three groups: proactive, on-demand, and hybrid QoS approaches. In proactive protocols, a routing table is held at each node for the purpose of forwarding packets. These tables are updated on a regular basis to ensure that all nodes have the most up-to-date routing information; as a result, if a source node requires a route, it can obtain one immediately. QOLSR (QoS Optimized Link State Routing) and PLBQR (Predictive Location-Based QoS Routing in Mobile Ad Hoc Networks) are two popular proactive QoS routing protocols. On-demand protocols are also known as reactive protocols. When there is no traffic, reactive protocols do not need to maintain the network topology; state information is obtained only when it is required. Route maintenance, on the other hand, is a critical function of reactive routing protocols, since source nodes can experience long delays in route searching before being able to forward data packets. Reactive QoS routing protocols include QoS AODV (QoS Ad hoc On-Demand Distance Vector), ACMP (Adaptive Core-based Multicast routing Protocol), and CQMP (mesh-based multicast routing protocol with Consolidated Query Packets). Reactive routing protocols have a major advantage over proactive routing protocols in terms of control overhead. As the name suggests, a hybrid protocol is a mixture of proactive and reactive methods; hybrid protocols are designed to combine reliability and robustness. EHMRP (Effective Hybrid Multicast Routing Protocol) is an example of a hybrid QoS routing protocol.

Single constrained vs. multi constrained metrics

Since throughput was considered the most essential requirement in the past, the majority of protocols concentrated solely on delivering guaranteed throughput. These single-constrained routing protocols have had considerable success, but they do not always perform well. In CEDAR, for instance, bandwidth is the only QoS parameter used for routing. Most multimedia applications demand that communication meet strict criteria for delay, delay-jitter, cost, and other quality-of-service metrics, so the trend in this area is to shift away from single-constrained routing and toward multi-constrained routing. The main goal of multi-constrained QoS routing is to find a feasible route that meets several constraints at the same time, which is a difficult task in MANETs, where the topology is constantly changing; such a problem has been shown to be NP-complete. Typical multi-constrained routing protocols include QMRPD (QoS Multicast Routing Protocol for Dynamic group topology) [33], GAMAN (Genetic Algorithm-based Routing for MANETs), and HMCOP (Heuristic Multi Constrained Optimal Path).

Hard QoS vs. Soft QoS

QoS provisioning approaches fall into two broad types: hard QoS and soft QoS. A QoS approach is called hard QoS if the connection's QoS requirements are guaranteed to be fulfilled for the duration of the session. It is extremely difficult to provide hard QoS assurances to user applications in MANETs; NSR and SIRCCR (SIR and Channel Capacity based Routing) are two protocols that attempt it. Soft QoS refers to a QoS approach in which the QoS conditions are not guaranteed for the duration of the session, so QoS guarantees can only be made within statistical limits. The majority of protocols give soft QoS assurances.

Congestion occurs, for example, when several links of varying bandwidth feed into a router. If the incoming data rate exceeds the outgoing data rate, packets will queue until the router buffer is full, at which point all further incoming packets will be lost. The queuing in the buffer, as well as the retransmission of TCP packets, are two causes of delay. Different schemes may be used to combat the impact of traffic congestion; they fall into three groups: congestion prevention, congestion avoidance, and congestion detection.

QOS protocols

First-in, First-out (FIFO) Queuing

The simplest queue management discipline is first-in, first-out (FIFO). In FIFO queuing, all packets are treated identically: they are placed into a single queue and then serviced in the same order in which they arrived. FIFO queuing is also known as first-come, first-served (FCFS) queuing.

FIFO benefits and limitations

The following are some of the advantages of FIFO queuing:

 Compared with more elaborate queue scheduling disciplines, FIFO queuing places an extremely low computational burden on software-based routers.
 A FIFO queue's behaviour is very predictable: packets are not reordered, and the maximum delay is determined by the queue's maximum depth.
 FIFO queuing provides simple contention resolution for network resources without adding substantially to the queuing delay at each hop, as long as the queue depth remains low.

The following are some of the drawbacks of FIFO queuing:

 A single FIFO queue does not allow routers to organize buffered packets and then service one class of traffic differently from other classes of traffic.
 Since the mean queuing delay for all flows increases as congestion increases, a single FIFO queue has an equivalent effect on all flows. As a result, real-time applications traversing a FIFO queue can experience increased delay, jitter, and loss.
 FIFO queuing favors UDP flows over TCP flows during times of congestion. TCP-based applications decrease their transmission rate when packet loss occurs due to congestion, but UDP-based applications are unaffected by packet loss and continue to transmit at their normal rate. Because TCP-based applications slow their transmission rate to adapt to changing network conditions, they experience increased latency and jitter and consume a decreasing share of the output bandwidth.
 A bursty flow can consume a FIFO queue's entire buffer space, preventing all other flows from receiving service until the burst has passed. Other well-behaved TCP and UDP flows traversing the queue can experience increased delay, jitter, and loss as a result.
Fig. First-in, first-out (FIFO) Queuing

FIFO implementation and application

FIFO is the simplest discipline for vendors to implement and is commonly the default queuing behaviour on router interfaces; it is best suited to uncongested links where service differentiation is not required.
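A minimal sketch of FIFO behaviour with a bounded buffer and tail drop (Python; the class name and buffer depth are illustrative assumptions):

```python
from collections import deque

class FifoQueue:
    """Bounded FIFO queue: service order equals arrival order, and packets
    arriving while the buffer is full are tail-dropped."""
    def __init__(self, depth):
        self.buf = deque()
        self.depth = depth

    def enqueue(self, pkt):
        if len(self.buf) >= self.depth:
            return False          # buffer full: tail drop
        self.buf.append(pkt)
        return True

    def dequeue(self):
        return self.buf.popleft() if self.buf else None

q = FifoQueue(depth=3)
for pkt in ["p1", "p2", "p3", "p4"]:
    q.enqueue(pkt)                           # "p4" is dropped
print([q.dequeue() for _ in range(3)])       # ['p1', 'p2', 'p3']
```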

Priority Queuing (PQ)

Priority Queuing (PQ) uses four predefined queues, each with a different capacity: by default the high-priority queue holds 20 packets, the medium-priority queue 40 packets, the normal-priority queue 60 packets, and the low-priority queue 80 packets. The figure shows a PQ with eight distinct flows; as can be seen, the flows are allocated to the different queues based on the priority of their class. The network administrator is in charge of allocating traffic to these four queues. PQ avoids sending packets from lower-priority queues as soon as there are packets in a higher-priority queue. However, if PQ is already serving a lower-priority packet, it will finish sending it, because the queues are served in a non-preemptive priority manner.
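The scheduling decision described above can be sketched as follows (Python; the four class names and default depths are taken from the text, everything else is an illustrative assumption):

```python
from collections import deque

class PriorityQueuing:
    """Four-class priority queuing: the highest-priority non-empty queue is
    always served first, and each class has its own bounded buffer."""
    DEPTHS = {"high": 20, "medium": 40, "normal": 60, "low": 80}
    ORDER = ["high", "medium", "normal", "low"]

    def __init__(self):
        self.queues = {cls: deque() for cls in self.ORDER}

    def enqueue(self, pkt, cls):
        q = self.queues[cls]
        if len(q) >= self.DEPTHS[cls]:
            return False          # class buffer full: drop
        q.append(pkt)
        return True

    def dequeue(self):
        for cls in self.ORDER:    # scan from highest to lowest priority
            if self.queues[cls]:
                return self.queues[cls].popleft()
        return None

pq = PriorityQueuing()
pq.enqueue("bulk", "low")
pq.enqueue("voice", "high")
print(pq.dequeue())   # voice -- served before the earlier-arriving bulk packet
```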

Priority Queuing (PQ) benefits and limitations

PQ has a number of advantages:

 Compared with more elaborate queuing disciplines, PQ places a relatively low computational burden on software-based routers.
 PQ enables routers to organize buffered packets and then service one traffic class differently from other traffic classes. You may, for example, set priorities such that real-time applications, such as interactive voice and video, take precedence over non-real-time applications.

Fig. Priority Queuing

However, PQ has a number of drawbacks:

 Lower-priority traffic may experience unnecessary delay as it waits for unbounded higher-
priority traffic to be serviced if the amount of high-priority traffic is not policed or conditioned at
the network's edges.
 Lower-priority traffic may be dropped if the buffer space allocated to low-priority queues begins
to overflow when the amount of higher-priority traffic becomes overwhelming. If this happens, the
combination of packet drops, increased latency, and packet retransmission by host systems could
eventually result in total resource starvation for lower-priority traffic. Strict PQ can establish a
network environment in which a decrease in the level of service provided to the highest-priority
class is deferred until the entire network is dedicated to processing nothing but the highest-priority
service class.
 A misbehaving high-priority flow will greatly increase the amount of jitter and delay faced by
other high-priority flows in the queue.

 During times of congestion, PQ is not a solution to the limitation of FIFO queuing that favours
UDP flows over TCP flows. If you use PQ to place TCP flows in a higher-priority queue than UDP
flows, TCP window management and flow-control mechanisms will attempt to use all of the
available bandwidth on the output port, starving your lower-priority UDP flows.

PQ implementation and application

PQ can typically be configured to run in one of two modes by router vendors:

 Strict priority queuing

 Priority queuing with a rate limit

 Strict PQ guarantees that packets in a high-priority queue are always scheduled ahead of those
in lower-priority queues. Of course, the drawback of this approach is that an overwhelming
amount of high-priority traffic will cause lower-priority service classes to run out of bandwidth.
Some carriers, on the other hand, may want their networks to promote this form of behaviour.
Assume that a regulatory agency mandates that, in order to carry VoIP traffic, a service provider
must consent (under penalty of a large fine) not to drop VoIP traffic in order to ensure a
consistent standard of service, regardless of network congestion. Congestion could be caused by
inaccuracy in admission control, resulting in an influx of VoIP traffic or even a network failure.
This behaviour can be facilitated by using strict PQ without a bandwidth cap, putting VoIP traffic
in the highest-priority queue, and allowing the VoIP queue to consume bandwidth that would
otherwise be allocated to lower-priority queues if required. If the fines levied by the regulatory
agency surpass the rebates it is allowed to offer other customers for reduced service, a company
may be able to tolerate this form of action.
 With rate-limited PQ, packets in the high-priority queue are scheduled ahead of packets in
lower-priority queues only while the amount of traffic in the high-priority queue remains below a
user-configured threshold. Consider the case where a high-priority queue has been rate-limited to 20% of the
output port bandwidth. Packets from the high-priority queue are scheduled ahead of packets
from lower-priority queues as long as the high-priority queue uses less than 20% of the output
port bandwidth. If the high-priority queue uses more than 20% of the output port bandwidth,
lower-priority queue packets may be scheduled ahead of high-priority queue packets. As this
happens, there are no guidelines, so each provider decides how to prioritize lower-priority
packets over higher-priority packets in their implementation.
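Since each vendor defines this behaviour differently, the following Python sketch shows only one plausible interpretation: the high-priority queue wins a transmission slot only while the bytes it has consumed stay within a configured fraction of all bytes sent so far. The function name and the byte-share accounting are assumptions, not any vendor's algorithm.

```python
from collections import deque

def rate_limited_pq(high_q, low_q, high_limit_frac):
    """Each iteration sends one packet.  High priority is served first only
    while its cumulative byte share stays within `high_limit_frac` of all
    bytes sent; beyond that, low-priority packets go ahead.
    Queues hold (packet_id, size_in_bytes) tuples."""
    sent, high_bytes, total_bytes = [], 0, 0
    while high_q or low_q:
        high_ok = bool(high_q) and high_bytes <= high_limit_frac * total_bytes
        if high_ok or not low_q:
            pkt_id, size = high_q.popleft()
            high_bytes += size
        else:
            pkt_id, size = low_q.popleft()
        total_bytes += size
        sent.append(pkt_id)
    return sent
```

With a 20% limit and equal-sized packets, a high-priority packet is sent first, then low-priority packets are served while the high-priority share sits above the limit, and the remaining high-priority packets drain once the low-priority queue empties.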

At the edges and in the centre of your network, there are two key applications for PQ:

 By allowing you to delegate routing-protocol and other forms of network-control traffic to the
highest-priority queue during times of congestion, PQ can improve network stability.
 PQ enables the delivery of a service class with high throughput, low latency, low jitter, and low
loss. This capability enables you to deliver real-time applications like interactive voice or video,
as well as support TDM circuit emulation and SNA traffic, by prioritizing these services over all
others.

To avoid high-priority queues from being oversubscribed, you must effectively condition traffic at the
network's edges to serve these types of services. If you skip this step in the design phase, you'll find that
supporting these programmes is difficult. The main problem is that it's much easier to condition traffic
and assign bandwidth to a queue for some applications than it is for others. For example, provisioning
resources for a well-defined application like VoIP, where you know the packet size, traffic volume, and
traffic behaviour, is much simpler than provisioning resources for other types of applications like
interactive video, where there are simply too many variables. The existence of these unknowns makes
configuring traffic conditioning levels, maximum queue depths, and bandwidth limits for high-priority
queues extremely difficult.

Custom queuing (CQ)


Custom queuing allocates a certain amount of bandwidth to each type of traffic. To set up custom queuing,
the network engineer must first set aside bandwidth for each type of traffic. If one type of traffic is not
using its bandwidth, that bandwidth can be used by another type of traffic. Custom queues are served
using a round-robin algorithm, which visits the queues in round-robin order and allocates a byte budget
to each queue before moving on to the next. If a queue is empty, the router sends packets from the
next queue that has packets ready to send. Within each queue, packets are still served first in, first out,
but the different types of traffic share bandwidth. In the diagram below, custom queuing is configured on
a router so that the SNA queue takes 4000 bytes per round, Telnet 2000 bytes, and the default queue the
remaining 2000 bytes. The bandwidth percentages allocated are 50%, 25%, and 25%, respectively. If SNA
is not using its bandwidth, the other queues can use SNA's share until SNA needs it again.

Fig. Custom queuing

Custom queuing is not widely used in today's networks, but it allows network engineers to ensure that
each application gets a guaranteed percentage of the link, in this case for Telnet, SNA, and FTP.
However, unless high-priority traffic is divided into separate conversations using LOCADDR prioritization,
all SNA traffic shares the same output queue, potentially slowing interactive SNA response times. Custom
queuing does not provide delay guarantees, so it is not recommended for time-sensitive applications such
as video and voice, where delays are not acceptable.
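The byte-count round-robin service described above can be sketched as follows. The queue names and per-round byte counts mirror the SNA/Telnet/default example; the function itself is illustrative, not a router implementation. As in custom queuing, a packet already begun is finished even if it overruns the remaining budget.

```python
from collections import deque

def custom_queuing(queues, byte_counts, rounds=1):
    """Serve each queue up to its configured byte count per round, then move
    to the next queue.  `queues` maps name -> deque of (pkt_id, size);
    `byte_counts` maps name -> bytes allowed per round."""
    order = []
    for _ in range(rounds):
        for name in queues:
            budget = byte_counts[name]
            q = queues[name]
            while q and budget > 0:
                pkt_id, size = q.popleft()
                order.append(pkt_id)
                budget -= size   # a packet in flight finishes even if it overruns
    return order
```

With 1500-byte SNA packets and 1000-byte Telnet/default packets, the 4000/2000/2000 budgets yield roughly the 50/25/25% split described in the text.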

Weighted fair queuing (WFQ)


Weighted fair queuing (WFQ) was created independently in 1989 by Lixia Zhang and by Alan Demers,
Srinivasan Keshav, and Scott Shenker. WFQ is the foundation for a subset of queue scheduling disciplines
that are intended to overcome the FQ model's limitations:

WFQ accommodates flows of varying bandwidth requirements by allocating a different percentage of
output port bandwidth to each queue. WFQ also supports variable-length packets, ensuring that larger
packet flows are not given more bandwidth than smaller packet flows. Supporting equal bandwidth
allocation when forwarding variable-length packets increases the computational complexity of the
queue scheduling algorithm significantly. This is the primary reason why queue scheduling disciplines in
fixed-length, cell-based ATM networks are much simpler to enforce than in variable-length, packet-
based IP networks.

In 1992, A. K. Parekh demonstrated that WFQ can provide a strong upper bound on end-to-end delay for
sessions shaped at the network's edges by token- or leaky-bucket rate regulation.

WFQ benefits and limitation

Weighted fair queuing (WFQ) has two main advantages:

 it protects each service class by providing a minimum level of output port bandwidth regardless
of the actions of other service classes.

 WFQ guarantees a weighted equal share of output port bandwidth to proper service class with
a bounded delay when combined with traffic conditioning at the network's edges.

Fig. A Weighted Bit-by-bit Round-robin Scheduler with a Packet Reassembler

Weighted fair queuing, on the other hand, has a number of drawbacks:

 WFQ implementations by vendors are software-based rather than hardware-based. WFQ can
only be used on low-speed interfaces at the network's edges because of this.
 A misbehaving flow within a highly aggregated service class can have an effect on the output of
other flows within the same service class.
 WFQ uses a complicated algorithm that necessitates the storage of a large amount of per-
service class state as well as iterative state scans on each packet arrival and departure.
 When trying to accommodate a large number of service groups on high-speed interfaces,
computational complexity has an effect on WFQ's scalability.
 When considering the small amount of serialization delay introduced by high-speed links and
the lower computational requirements of other queue scheduling disciplines, reducing delay to
the granularity of a single packet transmission might not be worth the computational cost on
high-speed interfaces.
 Finally, while WFQ's guaranteed delay bounds are better than those provided by other queue
scheduling disciplines, they can still be very high.

Enhancement in WFQ

 Since its inception in 1989, several variations of WFQ have been created, each with its own
set of trade-offs aimed at balancing complexity, accuracy, and efficiency. These four WFQ
variants are among the most well-known:
 Class based WFQ assigns packets to queues based on packet classification criteria specified
by the user. The setting of the IP precedence bits, for example, will allocate packets to a
specific queue. After packets are allocated to queues, they will receive prioritized service
based on user-configured weights for each queue.
 Self-Clocked Fair Queuing (SCFQ) is a WFQ variant that makes computing the finish
time of a corresponding GPS system easier. The reduced complexity comes at the cost of a
larger worst-case delay, which grows with the number of service classes.
 Worst-case Fair Weighted Fair Queuing (WF2Q) is an improvement on WFQ that
uses both packet start and finish times to simulate a GPS system more accurately.
 Worst-case Fair Weighted Fair Queuing+ (WF2Q+) is an improved version of WF2Q
that uses a new virtual time function to reduce complexity and improve accuracy.

WFQ implementation and application

WFQ is used at the network's edges to ensure that bandwidth is distributed evenly among a variety of
service classes. WFQ can be set up to accommodate a number of different behaviours:

 Using a hash function computed over the source/destination address pair,
source/destination UDP/TCP port numbers, and the IP ToS byte, WFQ can be configured to
classify packets into a reasonably large number of queues.
 WFQ can be set up to allow the system to schedule only a small number of queues for
aggregated traffic flows. To allocate packets to queues in this configuration choice, the device
uses QoS policy or the three low-order IP precedence bits in the IP ToS byte. Based on the
weight that the device determines for each of the service groups, each queue is assigned a
different percentage of output port bandwidth. This method allows the device to assign
different amounts of bandwidth to each queue depending on the QoS policy category, or to
allocate increasing amounts of bandwidth to each queue as the IP priority increases.
 By simulating a generalized processor sharing (GPS) scheme, WFQ allows for a reasonable
allocation of bandwidth for variable-length packets. Although GPS is a theoretical scheduler that
cannot be implemented, it behaves similarly to a weighted bit-by-bit round-robin scheduling
discipline. In a weighted bit-by-bit round-robin scheduling discipline, individual bits from packets at the
head of each queue are distributed in a WRR manner. Since it considers packet length, this method
promotes a fair allocation of bandwidth.
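A heavily simplified Python sketch shows how a packet-based scheduler can approximate this bit-by-bit ordering using per-flow virtual finish times. This is an illustration of the idea rather than a faithful GPS emulation; in particular, the virtual-clock update here is crude, and the class and attribute names are invented.

```python
import heapq

class WfqScheduler:
    """WFQ sketch: each arriving packet gets a virtual finish time
    F = max(virtual_time, last_finish[flow]) + size / weight, and packets
    are transmitted in increasing finish-time order, approximating the
    bit-by-bit round-robin (GPS) ordering without sending individual bits."""

    def __init__(self, weights):
        self.weights = weights                     # flow -> bandwidth share
        self.last_finish = {f: 0.0 for f in weights}
        self.virtual_time = 0.0
        self.heap = []
        self.seq = 0                               # tie-breaker

    def enqueue(self, flow, size):
        start = max(self.virtual_time, self.last_finish[flow])
        finish = start + size / self.weights[flow]
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow, size))
        self.seq += 1

    def dequeue(self):
        if not self.heap:
            return None
        finish, _, flow, size = heapq.heappop(self.heap)
        self.virtual_time = finish                 # crude virtual-clock advance
        return flow, size
```

A flow with twice the weight finishes an equal-sized packet at half the virtual time, so it is served first, which is the weighted fairness the text describes.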

Deficit Weighted Round Robin (DWRR)

M. Shreedhar and G. Varghese suggested deficit weighted round robin (DWRR) queuing in
1995. The DWRR model is the foundation for a class of queue scheduling disciplines that resolve
the shortcomings of the WRR and WFQ models.

 When servicing queues containing variable-length packets, DWRR solves the
shortcomings of the WRR model by accurately supporting the weighted fair allocation
of bandwidth.

 The WFQ model's shortcomings are addressed by DWRR, which defines a scheduling
discipline with lower computational complexity and hardware implementation. This
enables DWRR to facilitate output port bandwidth arbitration on high-speed interfaces
in both the core and the edge.
Every queue in DWRR queuing is configured with a set of parameters:

 A weight that defines the percentage of the output port bandwidth allocated to the queue.
 A Deficit Counter that determines the maximum number of bytes that the queue is allowed to
transmit each time the scheduler visits it. The Deficit Counter allows a queue that was not
allowed to transmit in the previous round because the packet at the front of the queue was
greater than the Deficit Counter's value to save transmission "credits" for use in the next service
round.
 A quantum of service that is proportional to the weight of the queue and is measured in bytes.
Every time the scheduler visits a queue, the Deficit Counter for that queue is incremented by the
quantum. If both queues are active and quantum[i] = 2*quantum[x], queue i will obtain
twice the bandwidth of queue x.
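The three parameters above combine into the following sketch of a DWRR service loop. The function and queue names are illustrative, and the quanta are assumed to be positive so that every round makes progress.

```python
from collections import deque

def dwrr_schedule(queues, quanta, rounds):
    """Deficit weighted round robin sketch: on each visit a queue's deficit
    counter is incremented by its quantum; head packets are sent while their
    size fits the counter, which is decremented per packet sent.  An empty
    queue's counter is reset so idle queues cannot hoard credit.
    `queues` maps name -> deque of (pkt_id, size)."""
    deficit = {name: 0 for name in queues}
    sent = []
    for _ in range(rounds):
        for name, q in queues.items():
            if not q:
                deficit[name] = 0          # no saved credit for idle queues
                continue
            deficit[name] += quanta[name]
            while q and q[0][1] <= deficit[name]:
                pkt_id, size = q.popleft()
                deficit[name] -= size
                sent.append(pkt_id)
    return sent
```

Note how a queue whose head packet was too large for this round's counter saves its credit: with 700-byte packets and a 500-byte quantum, queue B sends nothing in round one but transmits in round two once two quanta have accumulated.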

DWRR queuing has the following advantages:



 It protects various flows by ensuring that a badly behaved service class in one queue does
not affect the efficiency of other service classes allocated to other queues on the same
output port;
 When forwarding variable-length packets, it overcomes the limitations of WRR by allowing
precise control over the percentage of output port bandwidth allocated to each service
class.
 To prevent bandwidth starvation, it overcomes the limitations of strict PQ by ensuring that
all service groups have access to at least some configured amount of output port
bandwidth; and
 From a computational standpoint, implements a relatively simple and inexpensive algorithm
that does not necessitate the maintenance of a large amount of per-service class state.

DWRR queuing, like other templates, has limitations:

 Because of the highly aggregated nature of service classes, a misbehaving flow within one will
affect the output of other flows within the same service class. However, routers are expected to
schedule aggregate flows in the centre of a broad IP network because the large number of
individual flows makes per-flow queue scheduling disciplines impractical.
 DWRR may not allocate bandwidth as accurately as some other queue scheduling disciplines.
However, the accuracy of bandwidth allocation is less important over high-speed links than it is over
low-speed links.

Fig. DWRR

DWRR implementation and application


Since the DWRR queue scheduling discipline can be applied in hardware, it can be used to arbitrate the
weighted allocation of output port bandwidth among a fixed number of service groups in both the
centre and the edges of the network. DWRR offers all of the advantages of WRR while also fixing WRR's
shortcomings by allowing for precise bandwidth allocation when scheduling variable-length packets.

Modified deficit Round Robin Queuing Discipline (MDRR)



The MDRR (Modified Deficit Round Robin) is a queuing system found only in Cisco Gigabit Switch
Routers. Despite this, the queuing approach is supported by any router in the OPNET simulation
software. The MDRR queue classifies packets based on their IP precedence area, allowing it to map up to
eight different classes, each of which can contain multiple flows. Each queue has a fixed bandwidth and
serves packets in a FIFO manner, with tail-drop and WRED support.

Fig. Modified deficit Round Robin

MDRR introduces a new parameter called the Quantum Value (QV), which is the sum of the weight and
the MTU. The initial deficit value of the queue is set to this QV, and it is reduced by a value equal to the
packet length in bytes. MDRR can accommodate up to eight queues; all but one are served round
robin. The exception is a low-latency queue that can be set to either strict or alternative
priority mode. The distinction between the two is that in strict priority mode, the low-latency queue is

always served as long as it is not empty, while in alternative priority mode, the low-latency queue is
served first, then one of the other queues, then the low-latency queue, and so on. As with the regular
PQ, it's vital not to overload the low-latency queue with too many or too large flows, as this can cause
starvation in the other queues. The key distinction between the MDRR and DWRR queuing approaches is
the MDRR's low-latency queue.

 With the modified deficit round robin queuing discipline, non-empty queues are served one by
one in a round-robin manner. While a queue is being served, a fixed amount of data is dequeued,
and the scheduler then moves to the next queue in round-robin order. When a queue is served,
MDRR keeps track of how much data was dequeued above the configured byte value. The next
time the queue is served, less data is dequeued to compensate for the excess served previously,
so the average quantity of data dequeued per round stays close to the configured value. MDRR
also maintains a pre-emptive priority queue. MDRR has the following features:

The average number of bytes served in each round is known as the quantum value.

 Deficit counter – this counter keeps track of how many bytes a queue has transmitted in
each round. The deficit counter is initialized to the quantum value.
 Packets in each queue are served as long as the deficit counter is greater than zero. After a
packet is served, the deficit counter is decreased by the packet's length in bytes.

After the deficit counter reaches zero, a queue will no longer be served. Then a new round begins, with
the deficit counter in the non-empty queue being increased by the quantum value. Each MDRR queue is
assigned a weight, and any of the queues can be designated as a priority queue. When the interface is
congested, the weight assigns each queue relative bandwidth. If there is data in queue that needs to be
sent, the MDRR algorithm dequeues data in a round robin fashion from each queue.
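A sketch of MDRR with the low-latency queue in strict priority mode can be built from the deficit-counter rules just listed. The names are illustrative, the low-latency queue is modelled as simply drained whenever non-empty, and the quanta are assumed positive so that the rounds terminate.

```python
from collections import deque

def mdrr_strict(llq, queues, quanta):
    """MDRR sketch, strict priority mode: the low-latency queue (llq) is
    always served while non-empty; the remaining queues are then served
    deficit-round-robin with their quantum values.  Queues hold
    (pkt_id, size) tuples."""
    deficit = {name: 0 for name in queues}
    sent = []
    while llq or any(queues.values()):
        if llq:                              # strict priority: LLQ goes first
            sent.append(llq.popleft()[0])
            continue
        for name, q in queues.items():
            if not q:
                deficit[name] = 0
                continue
            deficit[name] += quanta[name]    # positive quanta ensure progress
            while q and q[0][1] <= deficit[name]:
                pkt_id, size = q.popleft()
                deficit[name] -= size
                sent.append(pkt_id)
    return sent
```

Both voice packets leave before any data packet, which also illustrates the starvation warning above: an overloaded low-latency queue would keep the round-robin queues waiting indefinitely.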

Modified Weighted Round Robin (MWRR)


In Modified Weighted Round Robin, every queue has a weight, and each queue's deficit counter is
initialized to a value that corresponds to the queue's weight. A queue is served as long as its deficit
value is greater than zero; otherwise, the queue is skipped. With each packet served, the deficit value
is reduced by the packet's size in cells. On each turn, the queue's deficit value is increased by its
weight. If a queue is empty on its turn, its deficit value is reset to zero and the next queue is served.

Fig. Modified Weighted Round Robin

QOS-AWARE Routing Protocols


The primary aim of QoS-aware routing protocols is to find a route from a source to a destination that
meets the desired QoS requirements. Within the constraints of bandwidth, minimal search, distance,
and traffic conditions, the QoS-aware path is calculated. The routing protocol can be defined as QoS-
aware since path selection is based on the desired QoS. Several routing protocols for locating QoS paths
have been suggested in the literature. Some of these QoS routing protocols are listed in the sections
below.

CEDAR
In ad hoc networks, the Core-Extraction Distributed Ad hoc Routing (CEDAR) algorithm is proposed for
QoS routing. To locate and avoid congested areas of the network, an elected subset of nodes advertises
bandwidth information along with its link-state updates. When a connection fails, CEDAR's route
re-computation is limited to the area surrounding the failure. Core extraction: a group of nodes is
chosen to form the core, which is responsible for maintaining the local topology of the nodes in its
domain as well as for route computations. The core nodes are chosen by approximating the ad hoc
network's minimum dominating set. Link-state propagation: CEDAR achieves QoS routing by propagation.
The basic concept is that information about stable high-bandwidth links is broadcast to nodes located
far away in the network, while information about dynamic or low-bandwidth links stays local. Route
computation: the core path from the source's domain to the destination's domain is established first.
CEDAR then iteratively tries to find a partial route from the source to the domain of the farthest
possible node in the core path that meets the requested bandwidth, using the directional information
given by the core path.

Multipath Routing Protocol


MRP (Multipath Routing Protocol) is a reactive on-demand routing protocol that extends the DSR
protocol to find multipath routing with bandwidth and reliability constraints. It is divided into three
stages: exploration, maintenance, and traffic allocation. The protocol selects many different alternate
paths that meet the QoS criteria during the routing discovery process, and the optimal number of
multipath routing is achieved to balance load balancing and network overhead. Similar to DSR, it can
effectively deal with route failures during the routing maintenance process. In addition, in the traffic
allocation process, per-packet granularity is used.

Genetic Algorithm based QoS Routing protocol for MANETS (GAMAN)


A Genetic Algorithm-based source-routing protocol for MANETs (GAMAN) has been proposed that uses
end-to-end delay and transmission success rate as its QoS metrics. Genetic Algorithms (GAs) can be used
to heuristically approximate an optimal solution to a problem, in this case finding the best route under
the two QoS constraints mentioned. The first stage of the process involves encoding routes so that a GA
can be applied; this is termed gene coding. For this purpose, paths are discovered on demand and a
network topology view is then built in a logical tree-like structure. Each node stores a tree rooted at
itself, with its neighbour nodes as child nodes and their neighbour nodes in turn as their children. The
route discovery algorithm is expected to collect locally computed metrics, such as the average delay
over a link and the link reliability, for the links on each path. After the gene encoding stage, the
fitness T of each path is calculated as

where Di and Ri are the delay and reliability of link i, respectively. The fitness values are used to
select paths for the crossover and mutation operations. The fittest path (the one with the smallest T)
and the offspring of the genetic operations are carried forward into the next generation. While this
method is a useful heuristic for approximating the optimal value over the delay and link-reliability
metrics simultaneously, it requires many paths to be searched in order to gather enough "genetic
information" for the GA operations to be meaningful. This means that the method is not suited to large
networks.

Predictive location based QoS routing Mobile ad Hoc Network (PLBQR)


PLBQR is a location-aware QoS routing protocol in which a location-and-delay prediction scheme, based
on a location-resource update protocol, is employed. The location updates carry resource information
pertaining to the node sending the update. This resource information for all nodes in the network and
the location prediction scheme are used together in the QoS routing decisions. Because of the high
degree of node mobility in an ad hoc network, topology and resource availability change dynamically.
Owing to these changes, the topological and routing information used by current network protocols
quickly becomes obsolete. The advantage of this scheme is that a prediction of the new location, based
on past locations, is made whenever the geographical position changes. QoS routing based on the
resource availability at the intermediate nodes on the source-to-destination route is performed, which
is uncommon in other location-based routing schemes. However, accurate prediction of speed and
direction is not made when there are dynamic changes in the route: movement is modelled only as
linear (i.e., angular velocity is assumed to be zero).

QoS Multicast Routing protocol with Dynamic group topology (QMRPD)


QMRPD is a hybrid protocol that attempts to significantly reduce the overhead of building a multicast
tree under multiple QoS constraints. In QMRPD, a multicast group member can join or leave a multicast
session dynamically without disturbing the multicast tree. It satisfies the various QoS constraints and
lowest-cost (or lower-cost) requirements. Its main objective is to construct a multicast tree that
optimizes a particular objective function (e.g., utilization of network resources) with respect to
performance-related constraints (e.g., end-to-end delay bound, inter-receiver delay-jitter bound,
minimum available bandwidth, and maximum packet-loss probability) and to design a multicast routing
protocol that handles dynamic group topology. It attempts to minimize the overall cost of the tree, and
dynamic group membership is handled by the protocol with little message-processing overhead.

QoS optimized Link State Routing (QOLSR)


The Optimized Link State Routing (OLSR) protocol is a proactive link-state routing protocol for MANETs.
One key idea is to reduce control overhead by reducing the number of transmissions compared with
pure flooding mechanisms. The essential concept supporting this idea in OLSR is the use of multipoint
relays (MPRs). MPRs are selected routers that may forward broadcast messages during the flooding
process. To reduce the size of broadcast messages, each router declares only a small subset of all of
its neighbours. The protocol is particularly suitable for large and dense networks. MPRs act as
intermediate routers in route discovery procedures; consequently, the path found by OLSR may not be
the shortest path, which is a potential disadvantage of OLSR. OLSR has three functions: packet
forwarding, neighbour sensing, and topology discovery. The packet-forwarding and neighbour-sensing
mechanisms give routers information about their neighbours and offer an optimized way to flood
messages in the OLSR network using MPRs. The neighbour-sensing operation allows routers to diffuse
local information to the whole network. Topology discovery is used to determine the topology of the
entire network and to calculate routing tables. OLSR uses four message types: Hello messages, Topology
Control (TC) messages, Multiple Interface Declaration (MID) messages, and Host and Network
Association (HNA) messages. Hello messages are used for neighbour sensing. Topology declarations are
based on TC messages. MID messages contain multiple interface addresses and perform the task of
multiple-interface declaration. Since hosts may have multiple interfaces connected to different subnets,
HNA messages are used to declare host and associated network information. Extensions of the message
types may include power-saving mode, multicast mode, and so on.

Issues and challenges while providing QOS in network


Providing QoS increases computing and communication costs: it takes longer to set up a connection,
and more state information must be kept per connection. When providing QoS for MANETs, the gain in
network utilization must be weighed against the increase in state information and the associated
complexity, and numerous issues must be addressed. The main issues that need to be addressed are
the following:

Unreliable channel
The key issue that arises from unreliable wireless channels is bit errors. Owing to high interference,
thermal noise, multipath fading effects, and other factors, these channels have a high bit error rate,
and as a result the packet delivery ratio is low. Since MANETs use wireless links, there is also a risk
of information leaking into the environment.

Maintenance of route
The changing behaviour of the communication medium and the dynamic nature of the network topology
make maintaining network state information extremely difficult. Established routing paths can break
even during the data-transfer process. As a result, routing paths must be maintained and reconstructed
with minimal overhead and delay. QoS-aware routing also requires resource reservations at intermediate
nodes, and reservation management becomes more difficult as the topology changes.

Mobility of node
Since the nodes under consideration are mobile, that is, they may travel in any direction and at any
speed, the topology information must be updated regularly and accurately in order to provide routing
to the final destination; otherwise, the packet delivery ratio drops.

Limited power supply


Compared with nodes in wired networks, mobile nodes are typically constrained by a limited power
supply. Because of the extra overhead it imposes on mobile nodes, providing QoS consumes more power
and can rapidly deplete a node's battery.

Lack of centralized control


Participants in an ad hoc network may join or leave the network at any time, and the network is created
on the fly. Because there is no centralized control over the nodes, QoS state information must be
disseminated in a distributed fashion, which increases algorithm overhead and complexity.

Channel contention
To provide the network topology, nodes in a MANET must communicate with each other on a common
channel. This, however, raises issues such as interference and channel contention. These can be avoided
in a variety of ways for peer-to-peer data communications. One method is to use a TDMA-based device
with global clock synchronization and each node transmitting at a predetermined time. Since there is no
centralized control on the nodes, this is difficult to do. Using a different frequency band or spreading
code (as in CDMA) for each transmitter is another choice. This necessitates a distributed channel
selection process as well as channel information distribution.

Security
Security is itself a quality-of-service attribute: if protection is inadequate, unauthorized access and use can violate QoS agreements. The broadcast nature of wireless networks exposes additional security risks, and the insecurity of the physical communication medium is intrinsic. Security-aware routing algorithms for ad hoc networks are therefore needed.

Conclusion and Future Work


The aim of this study was to assess the impact of various QoS schemes on the delay, jitter, and packet loss of different traffic flows in a TCP/IP communication network dedicated to WAMS systems. For this purpose, the TCP/IP traffic data and communication network details of typical TSOs were analyzed and modelled using the OPNET simulator, and the outcomes of the scenarios for the QoS schemes under study were gathered and analyzed. Factors such as time and available technology constrained the scope of this thesis; the limitations found to be most relevant to this work are the technical limitations of the simulation tool, restrictions on the implementation of some QoS schemes, and limited access to information due to security concerns.

The major challenges in designing a QoS routing protocol were discussed, together with the classification, metric assessment, and comparison of QoS-aware routing protocols for ad hoc wireless networks. The main goal in providing QoS in an ad hoc wireless network is to cope with a dynamically changing network topology, imprecise state information, the absence of a channel controller, an error-prone shared radio channel, a restricted power supply, the hidden-terminal problem, and an unstable medium. A QoS architecture is not complete without QoS routing. This report reviewed the routing protocols along with their strengths and weaknesses in order to identify potential research areas. The findings show that a range of open problems must still be tackled when designing QoS routing protocols for mobile ad hoc networks, including maximizing the precision of QoS routing, minimizing control overhead, route maintenance, resource reservation, cross-layer architecture, power consumption, and robustness and security. To address the current QoS routing issues in MANETs, new QoS routing protocols must be designed and developed so that future ad hoc networks can meet user expectations.
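The delay and jitter figures collected from the OPNET scenarios can also be reproduced offline from packet send and receive timestamps. The sketch below uses the standard interarrival-jitter estimator from RFC 3550 (the RTP specification), which smooths the change in one-way transit time between consecutive packets; the timestamp values shown are hypothetical.

```python
def rfc3550_jitter(send_times, recv_times):
    """Smoothed interarrival jitter as defined in RFC 3550, sec. 6.4.1.

    D(i-1, i) is the change in one-way transit time between consecutive
    packets; the jitter J is updated as J += (|D| - J) / 16, giving an
    exponentially smoothed estimate in the same units as the timestamps.
    """
    jitter = 0.0
    transit_prev = None
    for sent, received in zip(send_times, recv_times):
        transit = received - sent
        if transit_prev is not None:
            d = abs(transit - transit_prev)
            jitter += (d - jitter) / 16.0
        transit_prev = transit
    return jitter

# Hypothetical trace in milliseconds: transit times are 10, 12, 11 ms.
j = rfc3550_jitter([0, 20, 40], [10, 32, 51])
```

Because the estimator uses differences of transit times, it tolerates a constant clock offset between sender and receiver, which is convenient when simulation logs come from separately clocked nodes.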

