A Comparative Study in QOS Protocols
Name ()
Subject
Department
Supervisor
DECLARATION
Table of Contents
Introduction................................................................................................................................................6
Characteristics of QoS.................................................................................................................................8
Classification of QoS...................................................................................................................................9
Based on the Quality of Service (QoS) strategy used.............................................................................9
a) Based on how the routing protocol interacts with the QoS provisioning process....................9
Based on how the Routing Protocol and the MAC Protocol communicate.....................................10
In accordance with the routing information update process in use.................................................11
Evaluation metric for QoS protocols........................................................................................................12
Criteria of QoS Routing protocol classification.........................................................................................13
Single constrained vs. multi constrained metrics.....................................................................................14
Hard QoS vs. Soft QoS...............................................................................................................................14
QOS protocols...........................................................................................................................................15
First-in, First-out (FIFO) Queuing..........................................................................................................15
Priority Queuing (PQ)............................................................................................................................16
Custom queuing (CQ)............................................................................................................................19
Weighted fair queuing (WFQ)...............................................................................................................20
Deficit Weighted Round Robin (DWRR)...............................................................................................23
DWRR implementation and application...............................................................................................25
Modified deficit Round Robin Queuing Discipline (MDRR)..................................................................25
Modified Weighted Round Robin (MWRR)..........................................................................................26
QOS-AWARE Routing Protocols................................................................................................................27
CEDAR...................................................................................................................................................27
Multipath Routing Protocol..................................................................................................................28
Genetic Algorithm based QoS Routing protocol for MANETS (GAMAN).............................................28
Predictive location based QoS routing Mobile ad Hoc Network (PLBQR)...........................................29
QoS Multicast Routing protocol with Dynamic group topology (QMRPD)..........................................29
QoS optimized Link State Routing (QOLSR)..........................................................................................30
Issues and challenges while providing QOS in network...........................................................................31
Unreliable channel................................................................................................................................31
Maintenance of route...........................................................................................................................31
Mobility of node...................................................................................................................................32
Limited power supply.............................................................................................................................32
Abstract
This article provides a comprehensive overview of QoS routing metrics, tools, and factors affecting QoS
routing protocol performance. Current QoS routing protocols are also examined and compared in terms
of their relative power, weakness, and applicability. QoS routing protocols are categorized based on the
QoS metrics that are used and the form of QoS guarantee that is given. The main aim of QoS
provisioning is to achieve more deterministic network activity so that network information can be
delivered more efficiently and network resources can be better exploited. Nowadays, the Internet provides
only best-effort service: traffic is transmitted as quickly as possible, but there is no guarantee
of timeliness or packet delivery during transmission. With the rapid transformation of the Internet into a
commercial system, customer expectations for service quality have risen at a rapid pace. People in
today's world are heavily reliant on network services such as VOIP, video conferencing, and file transfer.
Those facilities make use of a variety of Traffic Management systems. One of the most critical
mechanisms in a traffic management system is queuing. Each router in the network must enforce some
sort of queuing discipline that governs how packets are buffered while awaiting transmission. The main
goal of this paper is to survey the quality of service (QoS) behaviour of various queuing disciplines.
Introduction
Quality of Service (QoS) is a group of technologies that work together on a network to ensure that high-
priority applications and traffic are reliably delivered even when network capacity is reduced. This is
accomplished using QoS technologies, which provide differentiated handling and capacity allocation for
individual network traffic flows. This allows the network administrator to control the order in which
packets are processed as well as the amount of bandwidth available to each application or traffic flow.
Bandwidth (throughput), latency (delay), jitter (variance in latency), and error rate are all important QoS
metrics. As a result, QoS is especially important for high-bandwidth, real-time traffic like VoIP, video
conferencing, and video-on-demand, which are sensitive to latency and jitter. These applications are
referred to as "inelastic" because they have minimum bandwidth requirements and maximum latency
limits. Queuing and bandwidth control are the QoS systems for ordering packets and allocating
bandwidth, respectively. However, traffic must be differentiated using classification tools before they
can be enforced. Organizations can ensure continuity and sufficient resource availability for their most
critical applications by classifying traffic according to policy.
Traffic can be categorized in a number of ways, such as by port or IP address, or by programme or user. The
latter parameters allow for more accurate identification and, as a result, data classification.
Following that, rules are delegated to queuing and bandwidth management software to handle traffic
flows based on the classification they got upon joining the network. Packets in traffic flows may be
queued before the network is able to process them, thanks to the queuing system. Priority Queuing (PQ)
was created to ensure that the most critical batches of applications and traffic have the required
availability and minimal latency of network performance by assigning them a priority and specific
bandwidth based on their classification. This means that the most critical operations on a network do
not suffer from a lack of bandwidth due to lower-priority activities. Users, applications, and traffic can be
divided into up to eight distinct queues.
Bandwidth management systems monitor and regulate traffic flows on the network in order to prevent
the network from exceeding its capacity and resulting in network congestion. Traffic shaping, a rate
limiting technique used to maximize or guarantee efficiency and increase available bandwidth where
possible, and scheduling algorithms, which provide a variety of methods for supplying bandwidth to
particular traffic flows, are two mechanisms for bandwidth management. The above facilities and
controls can be handled and consolidated down to a single box, depending on the provider. QoS via Palo
Alto Networks firewalls is an example of this. Differentiated Services Code Point (DSCP) is used to
communicate QoS steps and classification outside the box and to downstream network infrastructure.
DSCP assigns a classification to each packet and communicates this to each box it passes through,
ensuring that QoS policy is applied consistently.
A network's ability to deliver predictable, satisfactory service to its users is known as
QoS. In other words, QoS assesses user satisfaction as well as network efficiency. Although some
applications, such as FTP, HTTP, and e-mail, are relatively tolerant of delays in transmitted
information, others, such as voice and video, are far more sensitive to information loss, delay, and jitter.
As a result, VoIP QoS is essential to ensure that voice packets are not lost or delayed while being
transmitted over the network. To improve VoIP QoS, different parameters such as (delay, jitter, and
packet loss) are calculated according to ITU guidelines. These parameters can be modified and
monitored within a reasonable range to improve VoIP QoS.
QoS routing protocols necessitate not only finding a route from point A to point B, but also a route that
meets the end-to-end QoS specifications, which are often expressed in terms of bandwidth (or) latency.
A network or service provider may provide users with a variety of services. A service can be defined in
this way by a collection of observable pre-specified service specifications, such as minimum bandwidth,
maximum latency, maximum delay variance (jitter), and maximum packet loss rate. After receiving a
service request from a user, the network must ensure that the user's flow's service requirements are
met, as agreed, for the lifetime of the flow (the stream of packets from the source to the destination). The first
task after receiving a service request from a customer is to find a loop-free route from source to destination
that has the resources required to fulfil the QoS requirements of the requested service. This process is known
as QoS routing.
In terms of jitter, reliability, delay, and bandwidth, different applications and data flows have different
requirements. The data flows and applications' Quality of Service (QoS) is determined by these
characteristics. Different streaming services, such as VoIP or video, can allow a certain amount of delay,
but not jitter, since even a small amount of jitter would affect the picture noticeably. However, since IP
networks are Best-Effort networks, none of these characteristics can be assured for an application or a
data flow. As a result, the aim of QoS is to provide predictability in performance for various applications.
There are several different types of delays that can be calculated in a network. Some of the more
popular are mentioned below:
Propagation delay – the time it takes a packet to travel through the medium.
Queuing delay – the time a packet spends waiting in a queue.
Processing delay – the time a packet spends being processed inside a router.
Packetization delay – the time it takes to convert data into packets.
Serialization delay – the time it takes to clock a packet's bits onto the physical link.
By changing the propagation media and upgrading the hardware of the servers and routers, these delays
can be reduced. However, since this is a question of cost, it may not be an option, so implementing QoS
may instead be used to manage the delay experienced by the application. Jitter is another word for delay
variance, and it refers to the difference in delay between successive packets. If the second packet has a
delay of 2 ms and the third has a delay of 3 ms, the jitter between them is 1 ms. Jitter is calculated by
taking the differences between the delays of consecutive packets and averaging the results. Even if some links
have enough bandwidth to handle a large amount of traffic, several smaller flows at key aggregation
points in the network may cause congestion. Other connections can have different upload and
download rates, causing congestion and packet loss as a result. Congestion control, which manages the
queuing and dropping of packets, is an important part of QoS. Packets are only dropped when the
hardware or software buffer has been filled. We can use Integrated or Differentiated Services to
identify the application's traffic and to ensure, or at least increase the probability, that none of the more
significant packets are lost.
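As a rough illustration of the averaging described above, the following Python sketch computes jitter from a list of per-packet delays (the function name and millisecond units are illustrative assumptions, not part of any standard):

```python
def jitter(delays_ms):
    """Mean absolute difference between consecutive packet delays (ms)."""
    if len(delays_ms) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

# Matching the example in the text: delays of 2 ms and 3 ms give 1 ms of jitter.
print(jitter([2, 3]))        # 1.0
print(jitter([2, 3, 7, 5]))  # (1 + 4 + 2) / 3, about 2.33
```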
In essence, Integrated Services uses the Resource Reservation Protocol (RSVP) to reserve bandwidth for the
various applications. The applications inform the network of the type of QoS they need, causing
the network to set aside bandwidth for them. The issue with Integrated Services is that it does not
scale, and all affected systems must support RSVP.
Characteristics of QoS
Delivering QoS comes at the cost of higher computational and communication overhead. To put it
another way, it takes longer to set up a connection, and more state must be kept per connection.
When providing QoS for MANETs, the gain in network utilization must be weighed against the increase
in state information and the related complexity, and numerous issues must be addressed. The following
are the main issues that need to be addressed:
Classification of QoS
1. Based on the Quality of Service (QoS) strategy used.
2. On the basis of the layer.
3. Other Quality of Service (QoS) Options
a) Based on how the routing protocol interacts with the QoS provisioning process.
The coupled QoS approach and the decoupled QoS approach are the two types of QoS approaches that can
be defined here:
• INSIGNIA
Based on how the Routing Protocol and the MAC Protocol communicate.
As shown below, there are two types of QoS approaches: independent QoS approaches and dependent
QoS approaches.
The metrics mentioned below are some of the most common metrics used by
applications to express QoS requirements to routing protocols.
Minimum Throughput (bps) – the minimum data throughput required by the application.
Maximum Delay (s) – the maximum end-to-end delay for data packets that can be tolerated.
Maximum Delay Jitter – the difference between the upper bound and the absolute minimum
end-to-end delay.
Maximum Packet Loss Ratio – the permissible percentage of total packets sent that are not
received by the final destination node. A metric's value over the entire path will take one of
the following forms:
Additive metrics: these compose over a path as a sum:
m(p) = m(lk1) + m(lk2) + ... + m(lkLK)
where m(p) represents the value of metric m over the path (p), lki represents a link in the path
(p), LK represents the number of links in the path (p), and i = 1, ..., LK. This form of composition
covers metrics such as delay, delay variance (jitter), and cost. Various factors that influence
communication network delay are discussed in the literature.
Convex metrics: these can be expressed as m(p) = max(m(lki)) for i = 1, ..., LK. The convex rule is
used to calculate vulnerability (in the sense of security); throughput composes dually, as the
minimum over the links of the path. Whatever metrics are used to determine the route, they must
represent the basic network properties that are of interest. Residual bandwidth, delay, and jitter
are examples of such metrics. The metrics determine the types of QoS guarantees the network can
accept, since the flow QoS specifications must be mapped into path metrics. QoS-based routing, on
the other hand, is unable to accept QoS requirements that cannot be mapped into such path metrics.
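A small worked example may make the two composition rules concrete. The link values below are hypothetical; the sketch simply sums an additive metric (delay) and takes the bottleneck minimum for throughput:

```python
# Illustrative composition of link metrics over a path. Link values are
# hypothetical and chosen only to show the two composition rules.
path = [
    {"delay_ms": 5, "bw_mbps": 100},
    {"delay_ms": 2, "bw_mbps": 10},
    {"delay_ms": 8, "bw_mbps": 54},
]

# Additive metric: end-to-end delay is the sum over the links.
path_delay = sum(link["delay_ms"] for link in path)

# Min composition: achievable throughput is limited by the bottleneck link.
path_bw = min(link["bw_mbps"] for link in path)

print(path_delay)  # 15
print(path_bw)     # 10
```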
The trend in this area is to shift away from single-constrained routing and toward multi-constrained
routing. The main goal of multiconstrained QoS routing is to find a feasible route that meets several
constraints at the same time, which is a difficult task in MANETs where the topology is constantly
changing. Such a problem has been shown to be NP-complete. Typical multi-constrained routing protocols
include QMRPD (QoS Multicast Routing Protocol for Dynamic Group Topology) [33], GAMAN (Genetic
Algorithm-based Routing for MANETs), and HMCOP (Heuristic Multi-Constrained Optimal Path).
Congestion occurs, for example, when several links of varying bandwidth feed into one router. If the
incoming data rate exceeds the outgoing data rate, packets will queue until the router buffer is
full, at which point all further incoming packets will be lost. Queuing in the buffer, as well as the
retransmission of TCP packets, are two causes of delay. Different schemes may be used to combat the
impact of traffic congestion. Congestion prevention, congestion avoidance, and congestion detection are
the three groups under which these schemes fall.
Congestion management
QOS protocols
Compared with more elaborate queue scheduling disciplines, FIFO queuing places an
extremely low computational burden on software-based routers.
A FIFO queue's behaviour is very predictable: packets are not reordered, and the maximum
delay is determined by the queue's maximum depth.
FIFO queuing provides easy contention resolution for network resources while not adding
substantially to the queuing delay faced at each hop as long as the queue depth remains low.
A single FIFO queue does not enable routers to arrange buffered packets and then service one
class of traffic differently than other classes of traffic.
Since the mean queuing delay for all flows increases as congestion increases, a single FIFO
queue has an equivalent effect on all flows. As a result, real-time applications traversing a FIFO
queue can experience increased delay, jitter, and loss as a result of FIFO queuing.
FIFO queuing favors UDP flows over TCP flows during times of congestion. TCP-based
applications decrease their transmission rate when packet loss occurs due to congestion, but
UDP-based applications are unaffected by packet loss and continue to transmit packets at their
normal rate. FIFO queuing will cause increased latency, jitter, and a reduction in the amount of
output bandwidth consumed by TCP applications traversing the queue because TCP-based
applications slow their transmission rate to adjust to changing network conditions.
A bursty flow will fill up a FIFO queue's buffer space, preventing all other flows from receiving
service until the burst is completed. Other well-behaved TCP and UDP flows traversing the
queue can experience increased delay, jitter, and loss as a result of this.
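The FIFO behaviour described above can be sketched in a few lines of Python (a toy model, not a router implementation; the class name and the tail-drop policy for a full queue are illustrative assumptions):

```python
from collections import deque

class FIFOQueue:
    """Single FIFO queue with a fixed depth; arriving packets are
    tail-dropped when the queue is full."""
    def __init__(self, max_depth):
        self.max_depth = max_depth
        self.q = deque()
        self.dropped = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.max_depth:
            self.dropped += 1   # tail drop: the arriving packet is discarded
            return False
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

fifo = FIFOQueue(max_depth=3)
for pkt in ["p1", "p2", "p3", "p4"]:   # p4 arrives to a full queue
    fifo.enqueue(pkt)
print(fifo.dropped)    # 1
print(fifo.dequeue())  # p1 -- packets leave strictly in arrival order
```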
Compared with more elaborate queuing disciplines, PQ places a relatively low computational
burden on software-based routers.
PQ enables routers to arrange buffered packets and then service one traffic class differently
than other traffic classes. You may, for example, set priorities such that real-time applications,
such as interactive voice and video, take precedence over non-real-time applications.
Lower-priority traffic may experience unnecessary delay as it waits for unbounded higher-
priority traffic to be serviced if the amount of high-priority traffic is not policed or conditioned at
the network's edges.
Lower-priority traffic may be dropped if the buffer space allocated to low-priority queues begins
to overflow when the amount of higher-priority traffic becomes overwhelming. If this happens, the
combination of packet loss, increased latency, and packet retransmission by host systems could
eventually result in total resource starvation for lower-priority traffic: under strict PQ, any
reduction in service is pushed onto the lower-priority classes first, until the entire network is
dedicated to processing only the highest-priority service class.
A misbehaving high-priority flow will greatly increase the amount of jitter and delay faced by
other high-priority flows in the queue.
During times of congestion, PQ is not a solution to solve the restriction of FIFO queuing, which
favors UDP flows over TCP flows. TCP window management and flow control systems will
attempt to use all of the available bandwidth on the output port if you use PQ to position TCP
flows in a higher-priority queue than UDP flows, starving your lower-priority UDP flows.
At the edges and in the centre of your network, there are two key applications for PQ:
By allowing you to delegate routing-protocol and other forms of network-control traffic to the
highest-priority queue during times of congestion, PQ can improve network stability.
PQ enables the delivery of a service class with high throughput, low latency, low jitter, and low
loss. This capability enables you to deliver real-time applications like interactive voice or video,
as well as support TDM circuit emulation and SNA traffic, by prioritizing these services over all
others.
To prevent high-priority queues from being oversubscribed, you must effectively condition traffic at the
network's edges to support these types of services. If you skip this step in the design phase, you'll find that
supporting these programmes is difficult. The main problem is that it's much easier to condition traffic
and assign bandwidth to a queue for some applications than it is for others. For example, provisioning
resources for a well-defined application like VoIP, where you know the packet size, traffic volume, and
traffic behaviour, is much simpler than provisioning resources for other types of applications like
interactive video, where there are simply too many variables. The existence of these unknowns makes
configuring traffic conditioning levels, maximum queue depths, and bandwidth limits for high-priority
queues extremely difficult.
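A strict priority scheduler of the kind described above can be sketched as a toy model (the class name and traffic labels are illustrative assumptions; real routers schedule at line rate in hardware or optimized software):

```python
import heapq

class PriorityQueuing:
    """Strict priority scheduler: lower class number = higher priority.
    The highest-priority non-empty queue is always served first."""
    def __init__(self):
        self.heap = []
        self.seq = 0  # tie-breaker preserving FIFO order within a class

    def enqueue(self, priority, pkt):
        heapq.heappush(self.heap, (priority, self.seq, pkt))
        self.seq += 1

    def dequeue(self):
        return heapq.heappop(self.heap)[2] if self.heap else None

pq = PriorityQueuing()
pq.enqueue(2, "ftp-1")
pq.enqueue(0, "voice-1")   # class 0: e.g. interactive voice
pq.enqueue(1, "video-1")
pq.enqueue(0, "voice-2")
order = [pq.dequeue() for _ in range(4)]
print(order)  # ['voice-1', 'voice-2', 'video-1', 'ftp-1']
```

Note how the two lower-priority packets wait until every class-0 packet is gone, which is exactly the starvation risk discussed above.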
Custom queuing is not widely used in today's networks, but it helps network engineers to ensure that
each application gets a guaranteed percentage of the connection, in this case for Telnet, SNA, and FTP.
However, unless high-priority traffic is divided into separate conversations using LOCADDR prioritization,
SNA would share the same output queue, potentially slowing down interactive SNA response times. Custom
queuing does not guarantee delay bounds, so it is not recommended for time-sensitive applications such as
voice and video.
A.K.J. Parekh demonstrated in 1992 that WFQ can provide a firm upper bound on end-to-end delay for
sessions shaped at the network's edges by token- or leaky-bucket rate regulation.
it protects each service class by providing a minimum level of output port bandwidth regardless
of the actions of other service classes.
When combined with traffic conditioning at the network's edges, WFQ guarantees each service class a
weighted fair share of output port bandwidth with a bounded delay.
WFQ implementations by vendors are software-based rather than hardware-based. WFQ can
only be used on low-speed interfaces at the network's edges because of this.
A misbehaving flow within a highly aggregated service class can have an effect on the output of
other flows within the same service class.
WFQ uses a complicated algorithm that necessitates the storage of a large amount of per-
service class state as well as iterative state scans on each packet arrival and departure.
When trying to accommodate a large number of service groups on high-speed interfaces,
computational complexity has an effect on WFQ's scalability.
When considering the small amount of serialization delay introduced by high-speed links and
the lower computational requirements of other queue scheduling disciplines, reducing delay to
the granularity of a single packet transmission might not be worth the computational cost on
high-speed interfaces.
Finally, while WFQ's guaranteed delay bounds are better than those provided by other queue
scheduling disciplines, they can still be very high.
Enhancement in WFQ
Since its inception in 1989, several variations of WFQ have been created, each with its own
set of trade-offs aimed at balancing complexity, accuracy, and efficiency. These four WFQ
variants are among the most well-known:
Class based WFQ assigns packets to queues based on packet classification criteria specified
by the user. The setting of the IP precedence bits, for example, will allocate packets to a
specific queue. After packets are allocated to queues, they will receive prioritized service
based on user-configured weights for each queue.
Self-Clocked Fair Queuing (SCFQ) is a WFQ variant that simplifies the computation of a packet's
finish time in the corresponding GPS system. The reduced complexity comes at the cost of a larger
worst-case delay, which grows with the number of service classes.
Worst-case Fair Weighted Fair Queuing (WF2Q) is an improvement on WFQ that
uses both packet start and finish times to simulate the GPS reference system more accurately.
Worst-case Fair Weighted Fair Queuing+ (WF2Q+) is an improved version of WF2Q
that uses a new virtual time function to reduce complexity and improve accuracy.
WFQ is used at the network's edges to ensure that bandwidth is distributed evenly among a variety of
service classes. WFQ can be set up to accommodate a number of different behaviours:
M. Shreedhar and G. Varghese suggested deficit weighted round robin (DWRR) queuing in
1995. The DWRR model is the foundation for a class of queue scheduling disciplines that resolve
the shortcomings of the WRR and WFQ models.
DWRR addresses the shortcomings of the WFQ model by defining a scheduling
discipline with lower computational complexity that lends itself to hardware
implementation. This enables DWRR to arbitrate output port bandwidth on
high-speed interfaces in both the core and at the edge.
Every queue in DWRR queuing is configured with a set of parameters:
A weight that defines the percentage of the output port bandwidth allocated to the queue.
A Deficit Counter that determines the maximum number of bytes the queue is allowed to
transmit each time the scheduler visits it. The Deficit Counter allows a queue that was not
allowed to transmit in the previous round, because the packet at the front of the queue was
larger than the Deficit Counter's value, to save transmission "credits" for use in the next service
round.
A quantum of service, measured in bytes, that is proportional to the weight of the queue.
Every time the scheduler visits a queue, the Deficit Counter for that queue is incremented by the
quantum. When both queues are active and quantum[i] = 2*quantum[x], queue i will receive
twice the bandwidth of queue x.
It protects various flows by ensuring that a badly behaved service class in one queue does
not affect the efficiency of other service classes allocated to other queues on the same
output port;
When forwarding variable-length packets, it overcomes the limitations of WRR by allowing
precise control over the percentage of output port bandwidth allocated to each service
class.
To prevent bandwidth starvation, it overcomes the limitations of strict PQ by ensuring that
all service groups have access to at least some configured amount of output port
bandwidth; and
From a computational standpoint, implements a relatively simple and inexpensive algorithm
that does not necessitate the maintenance of a large amount of per-service class state.
Because of the highly aggregated nature of service classes, a misbehaving flow within one will
affect the output of other flows within the same service class. However, routers are expected to
schedule aggregate flows in the centre of a broad IP network because the large number of
individual flows makes per-flow queue scheduling disciplines impractical.
DWRR's bandwidth allocation may not be as precise as that of some other queue scheduling
disciplines; however, precision of bandwidth allocation matters less over high-speed links than over
low-speed links.
Fig. DWRR
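The weight, quantum, and Deficit Counter mechanics described above can be sketched as a toy scheduler (illustrative only; the packet sizes and quanta are hypothetical, and resetting the deficit of an idle queue follows the common DRR convention):

```python
from collections import deque

def dwrr_schedule(queues, quantum, rounds):
    """Deficit Weighted Round Robin sketch.
    queues: dict name -> deque of packet lengths (bytes)
    quantum: dict name -> bytes credited per round (proportional to weight)
    Returns the transmitted (name, length) sequence."""
    deficit = {name: 0 for name in queues}
    sent = []
    for _ in range(rounds):
        for name, q in queues.items():
            if not q:
                deficit[name] = 0  # an idle queue accumulates no credit
                continue
            deficit[name] += quantum[name]
            # Serve packets while the head packet fits within the credit.
            while q and q[0] <= deficit[name]:
                length = q.popleft()
                deficit[name] -= length
                sent.append((name, length))
    return sent

queues = {"a": deque([600, 600, 600]), "b": deque([300, 300, 300])}
# quantum["a"] = 2 * quantum["b"], so queue "a" gets twice the bandwidth.
out = dwrr_schedule(queues, {"a": 600, "b": 300}, rounds=3)
bytes_per_queue = {n: sum(l for q, l in out if q == n) for n in ("a", "b")}
print(bytes_per_queue)  # {'a': 1800, 'b': 900}
```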
The MDRR (Modified Deficit Round Robin) is a queuing system found only in Cisco Gigabit Switch
Routers. Despite this, the queuing approach is supported by any router in the OPNET simulation
software. The MDRR queue classifies packets based on their IP precedence field, allowing it to map up to
eight different classes, each of which can contain multiple flows. Each queue has a fixed bandwidth and
serves packets in a FIFO manner, with tail-drop and WRED support.
MDRR introduces a new parameter called the Quantum Value (QV), which is the sum of the weight and
the MTU. The initial deficit value of the queue is set to this QV, and it is reduced by a value equal to the
packet length in bytes. MDRR can accommodate up to eight queues, all but one of which are served round
robin. The exception is a low-latency queue that can be set to either strict or alternative priority mode.
The distinction between the two is that in strict priority mode the low-latency queue is always served as
long as it is not empty, while in alternative priority mode the low-latency queue is served first, then one
of the other queues, then the low-latency queue again, and so on. As with regular PQ, it is vital not to
overload the low-latency queue with too many or too large flows, as this can cause starvation in the
other queues. The key distinction between the MDRR and DWRR queuing approaches is the MDRR's
low-latency queue.
With the modified deficit round robin queuing discipline, non-empty queues are served one by one in
round robin fashion. While a queue is being served, a fixed quantum of data is dequeued, and the
scheduler then moves on to the next queue in round robin order. When a queue is served, MDRR keeps
track of any data dequeued in excess of the configured number of bytes. The next time the queue is
served, less data is dequeued to compensate for the excess served previously. The average quantity of
data dequeued per round is therefore close to the configured value. MDRR also maintains a pre-emptive
priority queue. MDRR has the following features:
The quantum value is the average number of bytes served in each round.
The deficit counter keeps track of how many bytes a queue may still transmit in the current
round; it is initialized to the quantum value.
Packets in a queue are served as long as the deficit counter is greater than zero. After a packet is
served, the deficit counter is decreased by the packet's length in bytes.
After the deficit counter reaches zero, a queue will no longer be served. Then a new round begins, with
the deficit counter in the non-empty queue being increased by the quantum value. Each MDRR queue is
assigned a weight, and any of the queues can be designated as a priority queue. When the interface is
congested, the weight assigns each queue its relative bandwidth. If there is data in a queue that needs to be
sent, the MDRR algorithm dequeues data from each queue in round robin fashion.
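The alternative priority mode described above interleaves the low-latency queue between the other queues. A short sketch of the resulting service order (queue names are hypothetical):

```python
def mdrr_alternate_order(normal_queues, llq="llq", cycles=1):
    """Service order of MDRR in alternative priority mode: the low-latency
    queue is visited before each of the other queues in turn."""
    order = []
    for _ in range(cycles):
        for q in normal_queues:
            order.append(llq)   # low-latency queue gets a turn first...
            order.append(q)     # ...then one of the round-robin queues
    return order

print(mdrr_alternate_order(["q1", "q2", "q3"]))
# ['llq', 'q1', 'llq', 'q2', 'llq', 'q3']
```

In strict priority mode the low-latency queue would instead be drained completely before any other queue is visited.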
CEDAR
The Core-Extraction Distributed Ad hoc Routing (CEDAR) algorithm is proposed for QoS routing in ad hoc networks. To locate and avoid congested areas of the network, an elected subset of nodes advertises bandwidth information along with its link-state updates. When a link fails, CEDAR's route re-computation is limited to the area surrounding the failure. CEDAR operates in three phases:
Core extraction: a set of nodes is chosen to form the core, which is responsible for maintaining the local topology of the nodes in its domain as well as for route computations. The core nodes are chosen by approximating the ad hoc network's minimum dominating set.
Link-state propagation: CEDAR achieves QoS routing by propagating link-state information. The basic concept is that information about stable high-bandwidth links is broadcast to nodes far away in the network, while information about dynamic or low-bandwidth links stays local.
Route computation: the core path from the source's domain to the destination's domain is established first. Using the directional information given by the core path, CEDAR then iteratively tries to find a partial route from the source to the domain of the farthest possible node in the core path that meets the requested bandwidth.
MRP
MRP (Multipath Routing Protocol) is a reactive on-demand routing protocol that extends DSR to find multiple paths under bandwidth and reliability constraints. It is divided into three stages: route discovery, route maintenance, and traffic allocation. During route discovery, the protocol selects several distinct alternate paths that meet the QoS criteria, choosing the number of paths that best trades off load balancing against network overhead. Like DSR, it can deal effectively with route failures during route maintenance. Furthermore, per-packet granularity is adopted in the traffic allocation phase.
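As an illustration of the core-extraction phase, here is a minimal centralized sketch of the greedy dominating-set approximation; the real CEDAR computation is distributed, and the graphs used below are hypothetical examples:

```python
def greedy_dominating_set(adjacency):
    """Greedy approximation of a minimum dominating set.

    adjacency maps each node to the set of its neighbours. Repeatedly
    pick the node whose closed neighbourhood (itself plus neighbours)
    covers the most still-undominated nodes, until every node is
    dominated. The chosen set plays the role of the CEDAR core.
    """
    uncovered = set(adjacency)
    core = set()
    while uncovered:
        best = max(adjacency,
                   key=lambda n: len(({n} | adjacency[n]) & uncovered))
        core.add(best)
        uncovered -= {best} | adjacency[best]
    return core
```

On a star topology the greedy choice immediately selects the hub as the sole core node; on sparser topologies it yields a small set of core nodes such that every other node has a core neighbour (its "dominator").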
where Di and Ri are the delay and reliability of link i, respectively. The fitness values are used to select paths for the crossover and mutation operations. The fittest path (the one with the smallest T) and the offspring from the genetic operations are carried forward into the next generation. While this strategy is a useful heuristic for approximating the optimal value over the delay and link
Topology discovery is used to determine the topology of the entire network and to calculate routing tables. OLSR uses four message types: the Hello message, the Topology Control (TC) message, the Multiple Interface Declaration (MID) message, and the Host and Network Association (HNA) message. Hello messages are used for neighbor sensing. Topology declarations are carried in TC messages. MID messages contain multiple interface addresses and perform the task of multiple-interface declaration. Because some hosts have multiple interfaces connected to different subnets, HNA messages are used to declare host and associated network information. Extensions of the message types may include a power-saving mode, a multicast mode, and so on.
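To illustrate the routing-table calculation that follows topology discovery, here is a minimal sketch using breadth-first search over links learned from Hello and TC messages. The link data is a hypothetical example, and OLSR's actual calculation also involves multipoint relay (MPR) selection, which is omitted here:

```python
from collections import deque

def build_routing_table(links, source):
    """Compute next-hop routing entries from learned topology.

    links is an iterable of undirected (a, b) pairs gathered from
    Hello and TC messages; a BFS from `source` yields, for each
    reachable destination, a (next_hop, hop_count) entry along a
    minimum-hop route.
    """
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    table = {}            # destination -> (next_hop, hop_count)
    visited = {source}
    frontier = deque([source])
    while frontier:
        node = frontier.popleft()
        for nbr in adj.get(node, ()):
            if nbr not in visited:
                visited.add(nbr)
                # A direct neighbour is its own next hop; farther nodes
                # inherit the next hop recorded for the node we came through.
                if node == source:
                    table[nbr] = (nbr, 1)
                else:
                    table[nbr] = (table[node][0], table[node][1] + 1)
                frontier.append(nbr)
    return table
```

Each node runs this calculation independently over the same advertised topology, which is why consistent link-state dissemination is essential for loop-free routes.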
Unreliable channel
The key issue that arises from unreliable wireless channels is bit errors. Owing to high interference, thermal noise, multipath fading effects, and other factors, these channels have a high bit error rate. As a result, the packet delivery ratio is poor. Since MANETs use wireless technology, there is also a risk of information leakage into the environment.
Maintenance of route
The evolving behaviour of the communication medium and the complex design of the network topology
make maintaining network state knowledge extremely difficult. Even during the data transfer process, established routing paths can break. As a result, routing paths must be maintained and reconstructed with minimal overhead and delay. QoS-aware routing additionally necessitates resource
reservations at intermediate nodes. Reservation management becomes more difficult as topology
changes.
Mobility of node
Since the nodes under consideration are mobile, traveling in any direction and at any speed, the topology information must be updated regularly and accurately in order to provide routing to the final destination; otherwise the packet delivery ratio drops.
Channel contention
To provide the network topology, nodes in a MANET must communicate with each other on a common
channel. This, however, raises issues such as interference and channel contention. These can be avoided
in a variety of ways for peer-to-peer data communications. One method is to use a TDMA-based scheme with global clock synchronization, in which each node transmits at a predetermined time. Since there is no
centralized control on the nodes, this is difficult to do. Using a different frequency band or spreading
code (as in CDMA) for each transmitter is another choice. This necessitates a distributed channel
selection process as well as channel information distribution.
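The TDMA option mentioned above can be sketched as a toy static slot assignment. The node ids and frame length here are illustrative; a real MANET would need distributed slot negotiation and clock synchronization, which this sketch omits:

```python
def tdma_schedule(node_ids, frame_slots):
    """Assign each node a fixed transmit slot in a repeating TDMA frame.

    Toy static assignment: slot = position in the sorted id list,
    modulo the frame length (so slots are reused when there are more
    nodes than slots, which in practice would require spatial reuse).
    """
    return {node: i % frame_slots
            for i, node in enumerate(sorted(node_ids))}

def may_transmit(schedule, node, current_time, frame_slots):
    """A node transmits only when the frame's current slot is its own."""
    return schedule[node] == current_time % frame_slots
```

Because every node transmits only in its own slot, collisions on the shared channel are avoided by construction, at the cost of idle slots when a node has nothing to send.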
Security
Security is a quality-of-service feature. Unauthorized accesses and usages can breach the QoS
agreements if protection is inadequate. Because of the existence of broadcasts in wireless networks,
there could be more security risks. The insecurity of the physical means of communication is intrinsic. As
a result, security-aware routing algorithms for ad hoc networks are needed.