Ganapati Panda
Suresh Chandra Satapathy
Birendra Biswal · Ramesh Bansal
Editors
Microelectronics,
Electromagnetics and
Telecommunications
Proceedings of the Fourth ICMEET 2018
Lecture Notes in Electrical Engineering
Volume 521
Editors
Ganapati Panda
School of Electrical Sciences
Indian Institute of Technology Bhubaneswar
Bhubaneswar, Odisha, India

Suresh Chandra Satapathy
School of Computer Engineering
KIIT Deemed to be University
Bhubaneswar, Odisha, India

Birendra Biswal
Electronics and Communication Engineering
Gayatri Vidya Parishad College of Engineering (Autonomous)
Visakhapatnam, Andhra Pradesh, India

Ramesh Bansal
Electronics and Communication Engineering
University of Pretoria
Pretoria, South Africa
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Organizing Committee
Chief Patron
Patrons
General Chairs
Program Chair
Organizing Chairs
Organizing Co-chairs
Assistant Prof. Mr. G. Anand Kumar for making this conference a grand success.
We truly appreciate Dr. V. Leela Rani and her student volunteers for showcasing their hidden talents in the cultural show and entertaining all the delegates. We are also
thankful to Prof. C. V. K. Bhanu, Dean, Research and PG affairs, for providing
necessary support to check the content of all the submitted manuscripts for
plagiarism. Our sincere thanks to Prof. N. Balasubrahmanyam (former Head of the
Department, Electronics and Communication Engineering Department, and Present
Chairman of Board of Studies), for providing all necessary help and support for the
conference.
We gratefully acknowledge the cooperation extended by many distinguished
researchers like Prof. Ganapati Panda, Former Deputy Director, IIT Bhubaneswar;
Prof. Sukumar Mishra, IIT Delhi; Prof. P. K. Mehar, NTU Singapore; Prof. Charles
Graham Leedham, Technical Director, TUMCREATE Ltd., Singapore, for giving
altogether a new dimension to ICMEET 2018.
Our sincere thanks to all session chairs, co-chairs, and distinguished reviewers
for their timely technical support. Finally, I would like to place on record that this conference is the outcome of the combined and collective effort of all. The success of this conference is entirely due to our teamwork.
Dr. Birendra Biswal received his master of engineering from Veer Surendra Sai
University of Technology (VSSUT) (formerly University College of Engineering,
Burla), Sambalpur, India, in 2002 and his Ph.D. from Biju Patnaik University of
Technology (BPUT), Odisha, India, in 2009. His research interests focus on advanced
signal processing, adaptive signal processing, image processing, data mining, power
quality, and soft computing. He has published several papers in IEEE, IET, and ScienceDirect journals. He is an associate fellow of the Andhra Pradesh Academy of Sciences and serves as a reviewer for many prestigious journals around the globe.
Prof. Ramesh Bansal is a professor and group head (Power Group) in the
Department of Electrical, Electronic and Computer Engineering, University of
Pretoria, South Africa. Holding a master of engineering in power systems and a
Ph.D. in hybrid power systems, he has more than 25 years of experience in
teaching, research and industry. He has published more than 250 research papers,
10 books, and 10 chapters. He is an editor of many reputed journals including
IET-Renewable Power Generation and Electric Power Components and Systems.
He has received several fellowship awards, including Fellow, IET (UK); Fellow, Engineers Australia; Senior Member, IEEE; Chartered Engineer (UK); and Fellow, Institution of Engineers (India).
Hand Gesture-Based Quadcopter
1 Introduction
Clinton B. De Soto [1] of the American Radio Relay League (ARRL) first carried out the concept of remote-controlled flight in 1936 and gave a public demonstration of it. He built a sailplane with about a 13-foot wingspan that made up to 100 radio-controlled flights during the summer and fall at Hartford, Connecticut. Hull started his research on homebrew radio apparatus during that period. He increased radio-frequency (RF) transmitter efficiency by shortening the leads and was the first to design a much lighter onboard receiver for model aircraft. Walter and William Good won first place in 1940 and 1947 at the US National Aeromodeling Championships for developing the first RC model airplane.
A quadcopter is a rotorcraft that has become very popular because it can operate in many areas. Compared to other multi-rotors, a quadcopter is cheap and simple to design. It works mainly with four servo motors attached at the frame ends. These four motors are controlled by a flight controller board, which takes instructions from the remote transmitter, a mobile phone, or a base station computer or laptop, and then controls the quadcopter movements while maintaining stability with the help of sensors built into the board. The board can also run preprogrammed missions, comprising self-level functions coded inside the microcontroller. A quadcopter basically uses two sets of identical fixed-pitch propellers; one set is of the clockwise (CW) type and the other set is of the counterclockwise (CCW) type. This helps the machine hover in one place by maintaining stability throughout the flight.
The remote control transmitter, which directs the signals to the quadcopter wirelessly, angles the quadcopter by adjusting the controls: thrust, yaw, roll, and pitch. The remote receiver, which receives these control signals from the transmitter, makes the copter hover in space. Controlling the quadcopter with the remote sticks is a very difficult task; one needs a lot of practice before jumping directly into flying the quadcopter in real time. If one attempts to fly the copter without previous flying experience, the quad may crash, resulting in damage to the hardware, which is again a huge loss. Research is still ongoing, and some technologies are being developed today on this control system to make remote operation more stable.
Flying the quadcopter autonomously is under development now. It basically works through a GPS system, which sends the route map data taken from the satellite to the flight controller. The controller then makes adjustments in the flight to travel along the path provided by the GPS. We can set the destination and the home return point in base station software designed especially for these autonomous quadcopters. We can even capture the quadcopter movements during its flight through a camera affixed to the copter body, so that every move can be viewed live from the base station. But this system has not proven flexible all the time, due to many factors such as short flight time, atmospheric effects, and unexpected failures during flight. It still requires in-depth research to overcome the drawbacks and to establish its importance in the market. Hence, people are mainly focusing on replacing the remote with other options such as control through hand gestures, smartphones with in-built angular sensors, and wearable systems. These technologies are gaining popularity because they are very easy to operate and control. One does not need any previous experience to fly these systems. Our primary aim in this project is to build one of these kinds of systems to make flying easier for everyone. Almost all quadcopters are controlled through a remote, and some others autonomously. In this project, the quadcopter is controlled wirelessly through wearable gestures.
2 Literature Review
There are many gesture-controlled quadcopters. One approach controls the quadcopter with visual tracking technology using a stereo camera [2], but it is very sensitive to illumination and not suitable for use in sunlight. A quadcopter based on the Xbox Kinect [3] is another gesture-control technology, but it is very expensive. One more technology is the haptic-based gesture-controlled quadcopter, which uses haptic sensors [4] placed on different fingers, controlling each movement with a different finger; but it is complex and not very flexible in maintaining synchronization between the sensors. In the collocated and Wizard of Oz (WoZ) approach [5], different gestures are taken as inputs from the public, but these mixed views create complexity in including all gestures in the program code. Using the Naza-M Lite as the flight controller board [6] is quite expensive and not a user-friendly option for a novice.
All the abovementioned works fell short of achieving proper stability, flexibility, security, simplicity, and an economical design for the system. The present work focuses on the above factors, along with controlling altitude, roll, and throttle simultaneously through a user-friendly control flight. For this particular quadcopter, yaw control is not included. It was unnecessary to control yaw because that variable requires no adjustment during flight, whereas the other three require constant control to avoid a failed flight.
3 Methodology
This quadcopter is operated in two modes, namely, remote mode and gesture mode. The two modes can be switched through a knob switch. In remote mode, the quadcopter is operated normally, like the basic model, by balancing the control sticks of the remote. In gesture mode, the quadcopter is operated through a designed wearable device. The quad can be controlled through the wearable mode by switching the auxiliary knob on the RC to the LOW state.

Since the quad is controlled in both modes, the wearable part is designed separately and connected with the quadcopter through Bluetooth technology. In remote mode, an RC remote operating at 2.4 GHz is used to communicate with the quadcopter, as shown in Fig. 1.
The receiver establishes communication with the RC transmitter and pushes the data, through the Arduino Mega, to the flight controller. The flight controller, after receiving the proper inputs from the Mega board, controls the speed of the motors by providing the necessary boosting to them through the electronic speed controllers. The speed of the motors varies with respect to the controls (throttle, yaw, rudder, and roll) given by the Tx remote. The necessary connections with the Mega board can be seen in Fig. 3.

Fig. 3 Receiver to Mega board and Mega board to KK2.1 flight controller connections
Similarly, when controlling through hand gestures, the Arduino Mega takes input from the Rx Bluetooth module, which is connected to the Tx and Rx pins on the Mega board, as shown in Fig. 4. The Rx Bluetooth module receives data from the Tx Bluetooth module, which is connected to the Tx and Rx pins on the Arduino Mini board.
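The mode arbitration between the RC receiver input and the Bluetooth input can be summarized in a short, platform-neutral sketch. The snippet below is illustrative Python pseudologic only (the actual implementation runs on the Arduino boards); the 1200 µs threshold for the LOW knob state and the function names are assumptions made for illustration.

```python
# Illustrative sketch of the dual-mode input arbitration described above.
# Assumption: the auxiliary RC channel is read as a pulse width in microseconds,
# and values below AUX_LOW_THRESHOLD_US correspond to the knob's LOW state.

AUX_LOW_THRESHOLD_US = 1200  # hypothetical boundary for the LOW state


def select_control_source(aux_pulse_us, rc_frame, bluetooth_frame):
    """Return the control frame to forward to the flight controller.

    aux_pulse_us    -- pulse width of the auxiliary knob channel
    rc_frame        -- (throttle, yaw, roll, pitch) decoded from the RC receiver
    bluetooth_frame -- (throttle, yaw, roll, pitch) decoded from the HC-05 link
    """
    if aux_pulse_us < AUX_LOW_THRESHOLD_US:
        return bluetooth_frame  # gesture (wearable) mode
    return rc_frame             # normal remote mode


if __name__ == "__main__":
    rc = (1500, 1500, 1500, 1500)
    bt = (1400, 1500, 1550, 1480)
    print(select_control_source(1000, rc, bt))  # knob LOW  -> gesture frame
    print(select_control_source(1800, rc, bt))  # knob HIGH -> remote frame
```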
In the wearable setup, the AltIMU sensor plays a major role; it has an inbuilt gyroscope, accelerometer, magnetometer, and barometer. These instruments capture all the angular movements of the hand and give the X- and Y-axis values. Along with this, an ultrasonic sensor is used to measure the distance from the ground, giving the Z-axis value, which acts as the throttle control. These two sensors push the X, Y, and Z data to the Arduino Mini, which then converts these axis values into rudder, roll, and throttle controls, respectively.
The trigger and echo signals of the distance sensor are connected to pins 2 and 3 of the Pro Mini board. The AltIMU sensor uses the I2C protocol to communicate with the KK2.1 board, and the HC-05 master is connected to the Pro Mini on digital pins 4 and 5, which are used as software serial pins. Figure 5 shows the connections of the wearable mode.
For a distance of more than 95 cm from the ground, the quad will take off to a greater height.

When the wearable is tilted to the left, right, front, or back, the distance measured by the ultrasonic sensor with respect to the ground varies. To avoid this, the Pro Mini is coded to hold constant distance data at tilted angles of the sensor. Yaw control is not included in this work to avoid overcorrection of data by a new user.
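A minimal sketch of the axis-to-control conversion and the altitude rule described above is given below. It is written in Python purely for illustration (the paper's logic runs on the Arduino Pro Mini); the 1000–2000 command range, the scaling factors, and the 15° tilt-hold guard are assumptions, while the greater-than-95-cm climb rule and the constant-distance behaviour at tilt follow the text.

```python
# Illustrative conversion of wearable sensor readings into flight controls.
# x_tilt, y_tilt: AltIMU tilt angles in degrees; distance_cm: ultrasonic reading.
# Command range, scaling factors, and the tilt guard are hypothetical values.

TILT_GUARD_DEG = 15.0       # beyond this tilt, hold the last valid distance
TAKEOFF_DISTANCE_CM = 95.0  # per the text: above 95 cm the quad climbs


def clamp(value, low=1000, high=2000):
    return max(low, min(high, value))


def gestures_to_controls(x_tilt, y_tilt, distance_cm, last_distance_cm):
    # Tilt angles map to rudder and roll around a 1500 neutral command.
    rudder = clamp(1500 + 10.0 * x_tilt)
    roll = clamp(1500 + 10.0 * y_tilt)

    # When the hand is strongly tilted the ultrasonic reading is unreliable,
    # so the previously recorded distance is held (the constant-distance rule).
    if abs(x_tilt) > TILT_GUARD_DEG or abs(y_tilt) > TILT_GUARD_DEG:
        distance_cm = last_distance_cm

    # The height of the hand above the ground acts as the throttle command;
    # above 95 cm the quad is commanded to climb.
    if distance_cm > TAKEOFF_DISTANCE_CM:
        throttle = 1800
    else:
        throttle = clamp(1000 + 8.0 * distance_cm)

    return rudder, roll, throttle, distance_cm


if __name__ == "__main__":
    print(gestures_to_controls(5.0, -3.0, 100.0, 60.0))
```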
4 Results
The purpose of this work is to make the task of controlling the quadcopter easier while maintaining stability. During testing, the sensor data is read and recorded for different angles and positions. Finally, the quadcopter is test-flown with those angles and positions in both modes until satisfactory results are obtained.

Stability: This is the major factor, and it was achieved by the system during the initial phases of testing. The wearable device worked efficiently when compared to the RC remote.
Flexibility: This setup can be used anywhere to control the quad within a 50 m radius, and it is also very easy to handle and operate.

Security: All RC remotes are radio-controlled devices; they can be easily hacked and taken over. This system resolves the issue by replacing the RC communication with Bluetooth technology, giving end-to-end security to the devices.

Figures 6 and 7 show the receiver-side simulated data with the knob switched from remote control mode to gesture control mode, showing the x and y directions and the distance from the ground. The x and y values represent pitch and roll, and thrust is represented by the z-axis.
Figures 8 and 9 show the final setup and the test flight demonstration of the wearable-controlled quadcopter. The wearable circuitry is held in the hand with the ultrasonic sensor facing the ground and the AltIMU sensor facing north, so that when hand movements are made while holding the circuit, a mirror image of those movements can be seen on the quadcopter.
5 Conclusion
This work can be further extended in the future by controlling the quadcopter with just a smartphone having an inbuilt gyroscopic sensor. Thus, with a gesture sensor and a modern quadcopter coupled together, quadcopter control technology will change its shape.
Acknowledgements My sincere thanks and gratitude to all the faculty members of the Department of Electronics and Communication Engineering, BVRIT, for all their timely support and valuable suggestions during the period of the project. Thanks to my parents and friends for supporting and encouraging me all the time.
References
1. De Soto CB (1936) Two hundred meters and down: the story of amateur radio. ARRL, Hartford
2. Achtelik M, Zhang T, Kuhnlenz K, Buss M (2009) Visual tracking and control of a quadcopter
using a stereo camera system and inertial sensors. In: Proceedings of the 2009 IEEE conference
on mechatronics and automation
3. Chilmonczykl M (2014) Kinect control of a quadrotor UAV. In: Spring Haptics Class Project
Paper presented at the University of South Florida, 30 April 2014
4. Vishwas S, Arora R, Bhaskar N, Lal SK, Bhandwal M (2015) Gesture controlled quadcopter.
Int J Environ Rehabil Conserv VI(2015):122–125
5. (Florence) Ng WS, Sharlin E (2011) Collocated interaction with flying robots. In: 20th IEEE
international symposium on robot and human interactive communication, 31 July–3 August
2011, Atlanta, GA, USA
6. http://hackaday.com/controlling-a-quadcopter-with-gestures
Robotic Flexible Artificial Finger Design
Using Nanosized DC Motors and Gears
for Finger Injuries
Abstract This paper deals with a study and analysis of typical finger injuries. The proposed study investigates an artificial finger based on sensitivity studies that would be technically feasible with nanosized gears and motors. The studies are based on a sensitivity analysis carried out on a test bed of a DC servo motor scaled down in a miniature fashion, based on the development of a position index from a matrix evaluation. The analysis data is fed to a novel fuzzy preview controller, which anticipates whether the condition of the finger is ill-conditioned and settles the final position of the finger. This logic would permit the fabrication of such a device with nanotechnology, as the lookup table can be realized using a nanochip. The motion of the finger along the three axes depends on how the sensor interprets the measured values, and the motor produces the motion based on the measured currents and voltages, which are interpreted by the closed-loop control scheme.
1 Introduction
This paper deals with studies on typical scenarios of accidents in which people have problems with the motion of the fingers of their hands. Exhaustive research is under way in which finger exoskeleton models, finger torque sensing models, and kinematic models have been prototyped and modeled; see, for example, [1–4]. Permanent magnet DC (PMDC) motors have been used by various researchers worldwide on interdisciplinary platforms, in windshield wipers, washers, blowers used in heaters and air conditioners, and in electric toothbrushes. As millions of automobiles are manufactured, PMDC motors play a crucial role. The rotor of a PMDC motor consists of an armature core, armature windings, and a commutator; the brushes are similar to those of conventional DC motors. Here, the proposed method uses a DC motor for the control of the finger position. The dynamics of such a system are compared to the stabilization of a double inverted pendulum, which has been studied intensively by various researchers in the control domain; the basic model was studied by Kavirayani [5]. The sensitivity studies of such a system can be scaled down and applied using nanotechnology together with studies on flexible link manipulators, which is the focus area. Jiang et al. [8] elaborated on a vision-based tactile sensor implemented to build artificial fingers capable of grasping for hand prostheses. Chappell et al. [9], on similar grounds, developed an algorithm based on the standard deviation (SD) of signal data from a piezoelectric sensor mounted on an artificial fingertip. Mohamad Hanif et al. [10] highlighted the potential of using prosthetic devices to sense surface textures. Yuji et al. [11] discuss a tactile sensor for multifunctional sensing devices to detect the normal contact force and temperature. Araki et al. [12] conducted real-time experiments for the development of a prosthetic hand system based on joint angle estimation. The authors of [13–20] have similarly elaborated on various possible variations of the design with gear mechanisms and on the design of artificial fingers and prosthetic hands. Ariyanto et al. [21] have discussed finger movement pattern recognition using neural networks, and Tokuda et al. [22] have developed a simulator model based on pulsed-power operation, which is used as a basis for extension and analysis in this paper. Elliot et al. [24] have discussed practical methods of dealing with hand injuries and the surgeries involved.
This paper proposes a novel idea of developing and analyzing an artificial finger design with the help of a redefined matrix that indicates the position of the finger, and analyzes the position using a novel fuzzy preview controller that labels the position index as well conditioned or ill conditioned.
2 Research Methodology
Koganezawa et al. [7] have developed a miniature model of the artificial finger as shown in Fig. 1. However, the mathematical realization used here is based on Fig. 2, which is a scaled-down realization of the proposed dynamic system. The proposed method investigates the feasibility of a sensitivity analysis in the control domain and the aspects of developing the plant dynamics. It is important to realize that critical signal delays play an important role in the motion control of the joints (upper and lower). Integrating the critical delays into the system matrices changes the sensitivity analysis. The plant model for the integrated model is taken from basic Newtonian physics, leading to the Euler–Lagrange equation as in [6], taking the kinetic and potential energies of the joints.
\frac{d}{dt}\left(\frac{\partial T}{\partial \dot{q}_i}\right) - \frac{\partial T}{\partial q_i} + \frac{\delta U}{\delta q_i} = Q_i \qquad (1)
Taking Eq. 1 and defining the plant model, the sensitivity analysis has been done for the plant model shown in Fig. 1 for the Flexible Link Finger Model (FLFM), taking into consideration uncertainty added to the plant model at the controller level, where the controller is designed using computational intelligence techniques (fuzzy control, and meta-heuristic methods such as BAT and particle swarms). However, in the initial conditions, the noise and uncertainty which do exist are assumed to be negligible in the proposed model. The model proposed by Koganezawa et al. is further reduced to a scale of 1:5 as shown in Fig. 3 for design flexibility. The torsional driving mechanism proposed by Koganezawa is taken and enhanced with a time delay constraint, assuming that the signal takes considerable time to reach the motor based on the sensed values. Equations 2 and 3 give the basic DC motor model. Equations 4 and 5 give the state-space model of the DC motor. The proposed variations of the model are seen in Eqs. 6 and 7, where the time delay in the signal propagation is added. The model is then analyzed using Eqs. 8 and 9, which indicate whether the position of the motor is ill conditioned or well conditioned.

Figure 4 indicates the complete structure of the nanofabrication as proposed in [7]. However, taking the DC motor as shown in Fig. 5 and modifying it as per Eqs. 6 and 7, the modified plant is analyzed. For ease of representation, the variables are written as R_a = r, L_a = L, and J_m = J. The values are defined in Appendix A.
\frac{di}{dt} = -\frac{r}{L}\, i - \frac{k}{L}\,\omega + \frac{1}{L}\, V \qquad (2)

\frac{d\omega}{dt} = \frac{k}{J}\, i - \frac{B}{J}\,\omega - \frac{1}{J}\, T \qquad (3)
The block diagram realization of the motor of Fig. 5 is given in Fig. 6. Taking a Padé approximation for the dead times involved in the system, the system definition is modified as follows in Eqs. 6 and 7.
\dot{X}_{New} = \begin{bmatrix} -r/L & k/L & 0 \\ k/J & -B/J & 0 \\ 0 & T_{dly1} & T_{dly2} \end{bmatrix} X_{New} + \begin{bmatrix} 1/L \\ 0 \\ 0 \end{bmatrix} V \qquad (6)

Y = C X_{New} + D V \qquad (7)
T_{dly1} and T_{dly2} are the time delays that are integrated into the system definition by row and column generation for both the state matrix and the input matrix. The dynamic equations are integrated with the time delay to understand the impact of the delay, and of variations in the delay, on the finger position when controlled by a motor. The system matrix defined in Eq. 2 becomes well conditioned in Eq. 6 with the row and column augmentation, which makes the system anticipate the future states with better precision compared to the definition in Eq. 4. The analysis of the augmentation and of the improvement in the matrix definition is based on Eqs. 8 and 9, which yield the matrix measure for the well-conditioned matrices defined in the new state-space equations. S_i is an estimate formed from the elements of row i of the matrix, where a_{ij} corresponds to the element in row i and column j, and K_{meas} is the value that indicates whether the matrix definition has become well conditioned or ill conditioned.
S_i = \left( a_{i1}^2 + a_{i2}^2 + \cdots \right)^{1/2} \qquad (8)

K_{meas} = \frac{|A|}{s_1 s_2 s_3 \ldots s_n} \qquad (9)
The measured value of K_{meas} indicates whether the finger position is well conditioned or ill conditioned; this is fed back to the motor using a fuzzy preview controller, which corrects the position.
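The conditioning test of Eqs. (6), (8), and (9) can be reproduced numerically, as in the minimal sketch below. The motor parameter values are hypothetical placeholders (the paper's values are given in its Appendix A, which is not reproduced here), so the sketch only illustrates how S_i and K_{meas} would be evaluated for the delay-augmented state matrix.

```python
# Sketch of the conditioning indicator of Eqs. (8)-(9) applied to the
# delay-augmented state matrix of Eq. (6). Parameter values are hypothetical.
import numpy as np

# Hypothetical motor parameters (the paper's values are in its Appendix A).
r, L, k, J, B = 1.0, 0.5, 0.01, 0.01, 0.1
T_dly1, T_dly2 = 0.02, 0.02   # signal-propagation delays used in the analysis

# Augmented state matrix of Eq. (6).
A_new = np.array([
    [-r / L,  k / L,   0.0],
    [ k / J, -B / J,   0.0],
    [  0.0,  T_dly1, T_dly2],
])


def conditioning_indicator(A):
    """Return the row norms S_i and K_meas per Eqs. (8) and (9)."""
    s = np.sqrt((A ** 2).sum(axis=1))            # S_i = (a_i1^2 + a_i2^2 + ...)^(1/2)
    k_meas = abs(np.linalg.det(A)) / np.prod(s)  # K_meas = |A| / (s1 s2 ... sn)
    return s, k_meas


if __name__ == "__main__":
    s, k_meas = conditioning_indicator(A_new)
    print("S_i:", np.round(s, 4))
    # By Hadamard's inequality K_meas lies in [0, 1]; values near zero point
    # to an ill-conditioned matrix, values near one to a well-conditioned one.
    print("K_meas:", round(float(k_meas), 4))
```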
The important parameters that play a key role are the error and the error rate, which form the two inputs to the fuzzy preview controller. The controller looks up values based on Driankov [23], a decision is made, and the output is passed on to the controller stage that amplifies the signal, which is then passed as input to the plant. The inputs to the fuzzy controller here are the position of the finger and the error in the position. The preview control is done with the help of granular fuzzy computing by taking the derivative of the errors, which makes the functionality of the controller faster; furthermore, the fuzzy preview controller can be realized using a nanochip that is memoryless. The feedback path has an LQR controller which performs the state feedback control. Thus, the fuzzy preview control takes a preview of the currents and voltages, which are the states whose future variations have to be estimated, and the approximation yields better overall accuracy. Figure 7 indicates the MATLAB realization of the plant.
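As a rough illustration of the error/error-rate lookup idea (not the paper's actual controller, whose rule base follows Driankov [23] and whose MATLAB realization is shown in Fig. 7), the toy sketch below maps coarse labels of the position error and its derivative to a normalized correction. The labels, boundaries, and rule outputs are invented for illustration only.

```python
# Toy sketch of an error / error-rate lookup in the spirit of the fuzzy
# preview controller described above. Labels, boundaries, and outputs are
# illustrative assumptions, not the paper's rule base.

def label(x, small=0.1):
    """Coarse linguistic label for a signal."""
    if x > small:
        return "pos"
    if x < -small:
        return "neg"
    return "zero"


# Rule table: (error label, error-rate label) -> normalized correction.
RULES = {
    ("neg", "neg"): +1.0, ("neg", "zero"): +0.6, ("neg", "pos"): +0.2,
    ("zero", "neg"): +0.4, ("zero", "zero"): 0.0, ("zero", "pos"): -0.4,
    ("pos", "neg"): -0.2, ("pos", "zero"): -0.6, ("pos", "pos"): -1.0,
}


def preview_correction(error, error_rate):
    """Return a normalized correction in [-1, 1] for the finger position."""
    return RULES[(label(error), label(error_rate))]


if __name__ == "__main__":
    print(preview_correction(0.3, -0.05))  # labels ("pos", "zero") -> -0.6
```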
3 Results
As indicated by Elliot et al. [24], the procedures followed in handling hand injuries vary worldwide. The realization of the position of the finger is based on the required input coordinates, indicated by x, y, and z as shown in Fig. 8. The motor moves among the three maximum positions 1, 2, and 3. Positions 1, 2, and 3 are considered the initial and extreme ends between which the finger normally moves, based on the coordinates. The coordinates indicated by the control system make the plant act as per the preview control data. Table 1 indicates the possible values that can be sensed from the output of the fuzzy preview control chip realization, which indicate the position of the artificial finger as well conditioned or ill conditioned. As indicated in Table 1, finger positions 2 and 3 are well-conditioned states, as the position index is low. Throughout the analysis, the delay considered is 0.02 s.

Figure 9 indicates the initial posture without delay; the delay is incorporated into the matrix definition at the input. Figure 10 indicates a possible motor connection and a sensor design for the preview controller. As shown in Figs. 11 and 12, the variation of the motor current and the angular velocity can be observed.
Fig. 7 Novel fuzzy preview controller with preview control (MATLAB Realization)
Fig. 11 Permanent magnet DC motor armature current with transport delay of 0.002 s
4 Conclusions
The major conclusions that can be drawn from the proposed work are as follows: the realization of the miniature version of the scaled-down drive mechanism for the artificial finger has to be done; the motor functionality is to be integrated with the possible delays into the system definition; and the novel fuzzy preview controller has to be realized as a memoryless nanochip controller, which is to be used for indicating the position of the finger. The voltages and currents for a simulated model indicate the possibility and feasibility of the design model. First-line surgical options and surgeons' preferences vary globally, and the design of efficient systems requires a thorough understanding of hand movements and gesture development.
References
1. Hussain R, Shahid MA, Khan JA, Tiwana MI, Iqbal J, Rashid N (2015) Development of a low-
cost anthropomorphic manipulator for commercial usage. In: 2015 international symposium
on innovations in intelligent systems and applications (INISTA), Madrid. pp 1–6
2. Ertas IH, Hocaoglu E, Barkana DE, Patoglu V (2009) Finger exoskeleton for treatment of
tendon injuries In: IEEE international conference on rehabilitation robotics, Kyoto international
conference center. pp 194–201
3. Stienen AHA, Moulton TS, Miller LC, Dewald JPA (2011) Wrist and finger torque sensor
for the quantification of upper limb motor impairments following brain injury. In: 2011 IEEE
international conference on rehabilitation robotics, Zurich. pp 1–5
4. Chang CW, Kuo LC, Cheng YT, Su FC, Jou IM, Sun YN (2007) Reliable model-based kine-
matics analysis system for articulated fingers. In: 2007 29th annual international conference
of the IEEE engineering in medicine and biology society, Lyon. pp 4675–4678
5. Kavirayani S (2005) Classical and neural net control and identification of non-linear systems
with application to the two-joint inverted pendulum control problem. University of Missouri-
Columbia, Diss
6. Kavirayani S, Gundavarapu N (2016) Naturally inspired firefly controller for stabilization of
double inverted pendulum, De Gruyter. Technol Eng 12(2):14–17. ISSN (Online) 1336–5967
7. Koganezawa K, Ishizuka Y (2007) Novel mechanism of artificial finger. In: 2007 IEEE/ASME international conference on advanced intelligent mechatronics, Zurich. pp 1–6. https://doi.org/10.1109/aim.2007.4412520
8. Jiang H, Zhu X, Xie W, Guo F, Zhang C, Wang Z (2016) Vision-based tactile sensor using
depth from defocus for artificial finger in hand prosthesis. Electron Lett 52(20):1665–1667
9. Chappell PH, Muridan N, Hanif NHHM, Cranny A, White NM (2015) Sensing texture using
an artificial finger and a data analysis based on the standard deviation. IET Sci Meas Technol
9(8):998–1006
10. Mohamad Hanif NHH, Chappell PH, Cranny A, White NM (2015) Surface texture detection
with artificial fingers. In: 2015 37th annual international conference of the IEEE engineering
in medicine and biology society (EMBC), Milan. pp 8018–8021
11. Yuji Ji, Shiraki S (2013) Magnetic tactile sensing method with Hall element for artificial finger.
In: Seventh international conference on sensing technology (ICST), Wellington. pp 311–315.
https://doi.org/10.1109/icsenst.2013.6727665
12. Araki N, Inaya K, Konishi Y, Mabuchi K (2012) An artificial finger robot motion control based
on finger joint angle estimation from EMG signals for a robot prosthetic hand system. In: The
2012 international conference on advanced mechatronic systems, Tokyo. pp 109–111
13. Niikura R, Kunugi N, Koganezawa K (2011) Development of artificial finger using the dou-
ble planetary gear system. In: IEEE/ASME international conference on advanced intelligent
mechatronics (AIM), Budapest. pp 481–486
14. Khodayari A, Talari M, Kheirikhah MM (2011) Fuzzy PID controller design for artificial finger
based SMA actuators. In: 2011 IEEE international conference on fuzzy systems (FUZZ-IEEE
2011), Taipei. pp 727–732. https://doi.org/10.1109/fuzzy.2011.6007542
15. Xu Z, Todorov E, Dellon B, Matsuoka Y (2011) Design and analysis of an artificial finger joint
for anthropomorphic robotic hands. In: 2011 IEEE international conference on robotics and
automation, Shanghai pp 5096–5102. https://doi.org/10.1109/icra.2011.5979860
16. Kheirikhah MM, Khodayari A, Tatlari M (2010) Design a new model for artificial finger by
using SMA actuators. In: 2010 IEEE international conference on robotics and biomimetics,
Tianjin. pp 1590–1595. https://doi.org/10.1109/robio.2010.5723567
17. Koganezawa K, Ishizuka Y (2008) Novel mechanism of artificial finger using double planetary
gear system. In: 2008 IEEE/RSJ international conference on intelligent robots and systems,
nice. pp 3184–3191. https://doi.org/10.1109/iros.2008.4650589
18. Fujimoto I, Yamada Y, Morizono T, Umetani Y, Maeno T (2003) Development of artificial
finger skin to detect incipient slip for realization of static friction sensation. In: Proceedings
of IEEE international conference on multisensor fusion and integration for intelligent systems,
MFI2003. pp 15–20. https://doi.org/10.1109/mfi-2003.2003.1232571
19. Yamada D, Maeno T, Yamada Y (2001) Artificial finger skin having ridges and distributed tactile
sensors used for grasp force control. In: Proceedings 2001 IEEE/RSJ international conference
on intelligent robots and systems. Expanding the societal role of robotics in the next Millennium
(Cat. No.01CH37180), Maui, HI, vol 2. pp 686–691. https://doi.org/10.1109/iros.2001.976249
20. Larimi SR, Nejad HR, Hoorfar M, Najjaran H (2016) Control of artificial human finger using
the wearable device and adaptive network-based fuzzy inference system. In: 2016 IEEE inter-
national conference on systems, man, and cybernetics (SMC), Budapest. pp 003754–003758.
https://doi.org/10.1109/smc.2016.7844818
21. Ariyanto M et al (2015) Finger movement pattern recognition method using artificial neu-
ral network based on electromyography (EMG) sensor. In: 2015 international conference on
automation, cognitive science, optics, micro electro-mechanical system, and information tech-
nology (ICACOMIT), Bandung. pp 12–17. https://doi.org/10.1109/icacomit.2015.7440146
22. Tokuda T et al (2008) Multi-finger structure and pulsed-powering operation scheme for CMOS LSI-based flexible stimulator for retinal prosthesis. In: 2008 30th annual international conference of the IEEE engineering in medicine and biology society, Vancouver, BC. pp 4212–4215. https://doi.org/10.1109/iembs.2008.4650138
23. Driankov D, Hellendoorn H, Reinfrank M (1996) An introduction to fuzzy control. Narosa Publications
24. Elliot D et al (2014) Repair and reconstruction of thumb and finger tip injuries: a global view. Clin Plast Surg 41(3):325–359. https://doi.org/10.1016/j.cps.2014.04.004
A Novel and Efficient Design
for Squaring Units by Quantum-Dot
Cellular Automata
Abstract Quantum-dot cellular automata (QCA) is one of the best possible alternatives to conventional CMOS technology due to its low power consumption, small area, and high-speed operation. This paper describes a synthesizable QCA implementation of squaring. The Vedic sutras used for squaring are described through algorithm construction. Based on the concept of the Vedic sutra, this paper presents 2-bit and 4-bit square designs mapped onto QCA logic gate constructions. Importantly for the miniaturization of devices, the square is an operation on which the area of QCA-based circuits relies. As a result, significantly lower QCA parameters can be achieved for the proposed square than for other competitive square circuits such as Wallace, Dadda, serial-parallel, and Baugh-Wooley designs.
1 Introduction
Squaring is widely used and plays an important role in logic computation, for example in signal processing and microprocessors. This is because most logic processing algorithms can be broken down to the level of the square, which is a basic operation. Many researchers have designed square circuits based on CMOS technology. But when the dimensions of the MOS transistors are scaled down to nanometers, the design faces two important problems: (i) tunneling effects take place, resulting in a change in the functionality of the design, and (ii) due to the effects of wire resistance and capacitance, the interconnections do not scale automatically [1]. There are two alternative approaches for solving the above problems of CMOS technology: (i) new transistor-based devices such as the tunnel FET (T-FET), the single-electron transistor (SET), and the carbon nanotube FET (CNT-FET), and (ii) other alternatives to transistor-based devices [2]. The first alternative is suitable for the implementation of a single computation unit, but the integration of several computational blocks still remains a challenge [2]. Considering the second alternative, quantum-dot cellular automata (QCA) is a new and efficient technology at the nanoscale. QCA circuits were reviewed in [3] to construct classical cellular automata with the help of quantum dots (q-dots); to differentiate the name from models of cellular automata performing digital computation, it is named QCA.
QCA is a promising technology which offers high density and low power with high performance for digital circuits. Unlike CMOS technology, QCA has no physical charge transport, as the Coulombic force is the sole mechanism of interaction between QCA cells. Thus, QCA emerges as a possible alternative to CMOS technology. The primary advantage of QCA is that it can represent a data bit while occupying only a small area, as a QCA cell has two electrons whose polarizations (P = +1 and P = −1) are used to represent logic "0" and logic "1". In CMOS transistor technology only the base layer is treated as an active layer, whereas in QCA all layers can be utilized as active layers on which a design can be constructed.
In digital design, the most important computational units are binary addition and multiplication. Squaring is also one of the most important operations in various cryptographic algorithms and in high-performance computing. Normally, the binary square of a number is calculated using a multiplier; however, many dedicated squaring techniques have also been presented in the literature [4, 5]. Vedic science is mainly associated with various Vedic sutras (or aphorisms) that deal with applications such as fast multipliers and other arithmetic operations. The importance of Vedic mathematics rests on the fact that it reduces the large calculations of conventional mathematics to very simple ones [6, 7].
The paper is outlined as follows: Sections 2 and 3 discuss the QCA computing paradigm and existing work. In Sect. 4, the square circuit is constructed and explained. The QCA design, simulation outcomes, and parameter comparison are discussed in Sect. 5. Finally, the conclusion is presented in Sect. 6.
2 Preliminaries
Figure 1a shows a QCA cell and the two different polarizations P = −1 and P = +1. Figure 1c, d shows the inverter and the three-input majority gate. Clocking has a very important impact on every QCA logic design: it not only controls the data transitions but also provides the power supply to the circuit [2].
Fig. 1 QCA design: a cell polarizations −1 and +1, b wires, c inverter, d three-input (A, B, C) majority gate
The most widely used clocking scheme is the four-phase clock: each cell is clocked using a four-phase clock, and the phase shift from one clocking zone to the next is 90°. The four phases of the clock control the data flow. In the switch phase, the cells start unpolarized with low potential barriers, and the barriers are then raised. The barriers are kept high in the hold phase and are lowered in the release phase. The barriers remain lowered in the relax phase, which allows the cells to stay in an unpolarized state. The logic transition of data occurs during the switch phase.
Many previous state-of-the-art designs using Vedic techniques have been presented on various platforms, i.e., microprocessors and FPGAs [8–14]. In [8], a Vedic multiplier is implemented on the 8085 and 8086 microprocessors and compared with the conventional multiplier. As per the study in [9], the multiplier architecture uses the crosswise and vertical algorithm "Urdhva Tiryagbhyam" of Vedic mathematics; this design improves in speed compared to the fast Booth multiplier when implemented on an FPGA. In other work [10], the multiplier architecture uses the "Nikhilam Sutra" of Vedic mathematics. This multiplier architecture finds the complement of a large number from its nearest base to perform the multiplication operation; therefore, the multiplication of two large numbers is reduced to the multiplication of their complements plus addition. The existing work [10] is extended in [11] by adding a carry save adder to the Vedic multiplier architecture, which reduces the propagation delay significantly. Both Vedic multipliers [10, 11] are synthesized and simulated using Xilinx ISE 10.1 software and also implemented on FPGA devices. On the FPGA platform, squaring architectures using Vedic mathematics have also been proposed in [12, 13]. Kasliwal et al. [12] presented squaring units using concurrent operation of the (Vedic) multiplier and the adder in VHDL. The authors compared the result with the conventional Booth algorithm in terms of time delay and area on a Xilinx Virtex 4vlx15sf36-12 device. The previous study in [13] shows the logical implementation of a squaring architecture using a Vedic sutra targeting a hardware model such as an FPGA.
The use of Vedic mathematics on the QCA platform has not been reported previously, and this motivates us to bring the advantages of Vedic mathematics to the QCA platform. This paper presents a multiplier-less squaring design in QCA using the Yavadunam Sutra of Vedic science. The design of the proposed architecture is derived from ancient Indian Vedic mathematics [6, 7]. The meaning of the "Yavadunam" Sutra is: "whatever the deficiency, subtract that deficit from the number and write alongside the square of that deficit". The Yavadunam algorithm converts the square of a large operand into the square of a smaller-magnitude operand plus an addition operation [14].
Table 1 presents the algorithm for squaring a binary operand reported in [13] for the FPGA platform. A certain range of deficits avoids the extra binary addition operation and leads to a reduced-bit multiplication, which corresponds to case 1 of step 4 in the algorithm. In the new squaring architecture, the benefit of Vedic science remains unaffected: complex calculations are simplified to very simple ones. Here, the proposed algorithm is used for designing the 4-bit squaring unit on the QCA platform.
According to the proposed algorithm described above, the design of the 4-bit square unit requires a 2-bit square unit, a complement unit, and a left shifter. The proposed design is further simplified using dedicated (i) 2-bit square and (ii) 2's complement units. Table 2 shows the squaring operation and the 2's complement operation for a 2-bit binary operand. For the 2-bit square operation, the Boolean output P is expressed as p3 = a1 AND a0, p2 = a1 AND (NOT a0), p1 = 0, p0 = a0. Similarly, for the 2-bit 2's complement operation, the Boolean output B is expressed as b1 = a1 XOR a0, b0 = a0. The proposed two-bit square requires only two AND gates and one inverter, resulting in a very small cell count and area for the overall design. Figures 2a and 3a show the circuit diagrams of the proposed 2-bit and 4-bit square designs, respectively. The layouts of the new designs are shown in Figs. 2b and 3b.
Example 1 presents the square operation on the 4-bit operand (1110), evaluated in four steps. In Step-I, the 2's complement of the lower half bits of the input operand is obtained using equations (V, VI). The 2-bit result of Step-I is squared in Step-II using equations (I–IV) to form the RPR of the final result. In Step-III, the LPR is obtained by a one-bit left shift of the input operand. The LPR and RPR together give the final square result.
Example 1: Squaring algorithm
Step-I
For a 4-bit number a = 14 ("1110"):
2's complement operation on the lower 2 bits (a1 a0 = "10"); the Boolean output B is b1 = a1 XOR a0 = "1", b0 = a0 = "0".
Step-II
For the square operation on b1 b0 = "10" (2 bits), the Boolean output P is p3 = b1 AND b0 = 0, p2 = b1 AND (NOT b0) = 1, p1 = 0, p0 = b0 = 0.
So RPR (right part of the result) = p3 p2 p1 p0 = "0100".
Step-III
A one-bit left shift operation on the 4-bit number (a3 a2 a1 a0 = "1110") gives p7 p6 p5 p4 = "1100" as the LPR (left part of the result).
Step-IV
The concatenation of LPR and RPR gives the square result:
LPR & RPR = "1100 0100" = (196)10.
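The two-bit primitives and the four Yavadunam steps illustrated above can be checked with a short behavioural script. The sketch below is a software model only (the paper's contribution is the QCA layout, not code); it follows the restricted case of the algorithm in which the deficit fits in two bits, so it is verified here only for the operands 13, 14, and 15.

```python
# Behavioural model of the proposed squaring scheme (not the QCA layout itself).
# Implements the 2-bit square, the 2-bit 2's complement, and the four Yavadunam
# steps of Example 1. Valid for 4-bit operands whose deficit from 16 fits in
# two bits (a = 13, 14, 15).

def square_2bit(a1, a0):
    """p3..p0 of the 2-bit square: p3 = a1·a0, p2 = a1·a0', p1 = 0, p0 = a0."""
    return a1 & a0, a1 & (1 - a0), 0, a0


def twos_complement_2bit(a1, a0):
    """b1..b0 of the 2-bit 2's complement: b1 = a1 xor a0, b0 = a0."""
    return a1 ^ a0, a0


def yavadunam_square_4bit(a):
    bits = [(a >> i) & 1 for i in range(4)]           # a0 .. a3
    # Step-I: 2's complement of the lower two bits gives the deficit.
    b1, b0 = twos_complement_2bit(bits[1], bits[0])
    # Step-II: square of the deficit forms the right part of the result (RPR).
    p3, p2, p1, p0 = square_2bit(b1, b0)
    rpr = (p3 << 3) | (p2 << 2) | (p1 << 1) | p0
    # Step-III: one-bit left shift of the operand (modulo 16) forms the LPR.
    lpr = (a << 1) & 0xF
    # Step-IV: concatenate LPR and RPR.
    return (lpr << 4) | rpr


if __name__ == "__main__":
    for a in (13, 14, 15):
        result = yavadunam_square_4bit(a)
        print(a, format(result, "08b"), result, a * a)
        assert result == a * a                        # e.g. 14 -> "11000100" = 196
```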
The proposed squaring layouts are simulated and characterized using the QCADesigner tool [15–17]. Figure 4 shows the simulation outcomes of the 2-bit square design [18–20]. The design parameters, such as the number of clocks, area, and cell count of the proposed designs, are compared in Table 3 with previously reported state-of-the-art QCA techniques.
6 Conclusions
Novel square units based on a Vedic sutra and constructed in the QCA paradigm have been proposed in this paper. Miniaturization of the proposed circuits has therefore been achieved by limiting their area. An extensive algorithm has been presented to examine the logic computation of an n-bit square architecture. The benefits of the proposed 2-bit and 4-bit squaring units are a low-complexity architecture, low latency, and a minimum footprint area. Owing to these promising results for square circuitry in the QCA paradigm, the designs can be used in recent nanoelectronics applications. They can be utilized in signal processing applications such as microprocessors and emerging computing devices.
References
1. Porod W (1997) Quantum-dot devices and quantum-dot cellular automata. J Frank Inst
334(5–6):1147–1175
2. Sridharan K, Pudi V (2015) Design of basic digital circuits in QCA. In: Design of arithmetic cir-
cuits in quantum dot cellular automata nanotechnology. Studies in computational intelligence,
vol 599. Springer, Cham (2015)
3. Lent CS, Tougaw PD, Porod W, Bernstein GH (1993) Quantum cellular automata. Nanotech-
nology. 4:49–57
4. Paar C, Fleischmann P, Soria-Rodriguez P (1999) Fast arithmetic for public-key algorithms in
Galois fields with composite exponents. IEEE Trans Comput 48(10):1025–1034
5. Sethi K, Panda R (2012) An improved squaring circuit for binary numbers. International J Adv
Comput Sci Appl 3(2):111–116
6. Mishra NK, Wairya S (2013) Low Power 32 × 32 bit multiplier architecture based on Vedic
mathematics using virtex 7 low power device. Int J Res Rev Eng Sci Technol. 2(2)
7. Thapliyal H, Kotiyal S, Srinivas MB (2005) Design and analysis of a novel parallel square and
cube architecture based on ancient Indian Vedic mathematics. In:48th Midwest Symposium on
IEEE Circuits and systems, pp 1462–1465
8. Chidgupkar PD, Karad MT (2004) The implementation of Vedic algorithms in digital signal
processing. Glob J Eng Educ 8(2):153–158
9. Pradhan M, Panda R (2010) Design and Implementation of Vedic Multiplier. AMSE J Comput
Sci Stat Fr 15:1–19
10. Pradhan M, Panda R (2012) Speed optimization of Vedic multiplier. AMSE J Gen Math
49:21–35
11. Pradhan M, Panda R (2013) High speed multiplier using Nikhilam Sutra algorithm of Vedic mathematics. Int J Electron 101(3):300–307 (2014). https://doi.org/10.1080/00207217.2013.780298
12. Kasliwal PS, Patil BP, Gautam DK (2011) Performance evaluation of squaring operation by
Vedic mathematics. IETE J Res 57(1):39–41
13. Barik RK, Pradhan M (2015) Area-time efficient square architecture. Adv Model Anal D
20(1):21–34
14. Pushpangadan R, Sukumaran V, Innocent R, Sasikumar D, Sundar V (2009) High speed vedic
multiplier for digital signal processors. IETE J Res 55(6):282–286
15. Vankamamidi V, Ottavi M, Lombardi F (2008) Two-dimensional schemes for clocking/timing
of QCA circuits. IEEE Trans Comput Aided Des Integr Circuits Syst 27(1):34–44
16. Misra NK, Wairya S, Sen B (2017) Design of conservative, reversible sequential logic for cost
efficient emerging nano circuits with enhanced testability. Ain Shams Eng J
17. Walus K, Dysart TJ, Jullien GA, Budiman RA (2004) QCADesigner: A rapid design and
simulation tool for quantum-dot cellular automata. IEEE Trans Nanotechnol 3(1):26–31
18. Walus K, Jullien G, Dimitrov V (2003) Computer arithmetic structures for quantum cellular automata. In: Record of the thirty-seventh Asilomar conference on signals, systems and computers, pp 1435–1439
19. Cho H, Swartzlander EE Jr (2009) Adder and multiplier design in quantum-dot cellular
automata. IEEE Trans Comput 58(6):721–727
20. Kim SW, Swartzlander EE (2009) Parallel multipliers for quantum-dot cellular automata. In:
Nanotechnology Materials and Devices Conference, 2009. NMDC’09. IEEE, pp 68–72
Optimal Forwarding in Named Data
Networking
1 Introduction
In NDN, a consumer sends an interest packet requesting named content, and a data packet is returned by any producer holding that data. The producer is not necessarily an end host. The data packet is delivered on the reverse path of the interest packet. Packet delivery and forwarding are based on names and not on addresses. An NDN interest packet is a combination of three basic identification parameters: a content name, a name selector, and a nonce (a random 32-bit number used to identify a unique interest). The corresponding data packet contains the requested data and can be served by any intermediate router or end host. Every data packet carries a signature along with a publisher ID. This makes NDN communication inherently secure.
Each NDN router maintains three data structures used for dynamic forwarding and in-network caching: the Pending Interest Table (PIT), the Content Store (CS) for caching, and the Forwarding Information Base (FIB). Data packets forwarded from upstream are cached in the content store of intermediate routers to serve future interest requests for the same named data. A PIT entry stores the interest packet name and nonce, and the PIT maintains the name prefix and the incoming interfaces of every interest packet. The PIT holds the entry until the data is received for the corresponding interest. For every data packet, a copy is stored in the CS for subsequent interest requests for the same data. The FIB in NDN maintains stateful routing information mapping every name prefix to the next-hop forwarding interfaces. For every interface, an index is maintained which holds pointers to every entry in the CS, PIT, and FIB. Duplication of packets is avoided by the use of a nonce in every interest packet [1, 2].
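The interplay of CS, PIT, and FIB described above can be summarized in a compact sketch. The model below illustrates the lookup order only; it is not the NDN Forwarding Daemon or ndnSIM code, and the data structures and longest-prefix-match routine are simplified assumptions.

```python
# Simplified model of NDN interest processing: Content Store (CS) lookup,
# then Pending Interest Table (PIT) aggregation, then FIB longest-prefix match.
# Illustrative only; not the NFD/ndnSIM implementation.

class NdnRouter:
    def __init__(self, fib):
        self.cs = {}     # content name -> cached data packet
        self.pit = {}    # content name -> {"faces": set(), "nonces": set()}
        self.fib = fib   # name prefix -> list of next-hop faces

    def longest_prefix_match(self, name):
        prefix = max((p for p in self.fib if name.startswith(p)), key=len, default=None)
        return self.fib.get(prefix, [])

    def on_interest(self, name, nonce, in_face):
        if name in self.cs:                        # served from the Content Store
            return ("data", self.cs[name], in_face)
        entry = self.pit.setdefault(name, {"faces": set(), "nonces": set()})
        if nonce in entry["nonces"]:               # duplicate nonce: loop avoidance
            return ("drop", None, None)
        entry["faces"].add(in_face)
        entry["nonces"].add(nonce)
        faces = self.longest_prefix_match(name)
        return ("forward", None, faces[0] if faces else None)

    def on_data(self, name, data):
        self.cs[name] = data                       # cache for future interests
        entry = self.pit.pop(name, {"faces": set()})
        return sorted(entry["faces"])              # downstream faces to satisfy


if __name__ == "__main__":
    router = NdnRouter({"ndn:/server": ["face2", "face3"]})
    print(router.on_interest("ndn:/server/chunk/1", nonce=42, in_face="face1"))
    print(router.on_data("ndn:/server/chunk/1", data=b"payload"))
    print(router.on_interest("ndn:/server/chunk/1", nonce=43, in_face="face1"))
```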
Inherent caching in NDN routers enables multicast data delivery. As the FIB maintains the state of every record, the intermediate routers in NDN allow adaptive forwarding. For every interest and data packet delivery, an NDN router can measure network performance attributes such as throughput, congestion, and RTT. This information is crucial for deciding alternative paths for forwarding. The basic NDN architecture does not define the forwarding plane; rather, it is left to the implementation [3, 4]. The goal of this paper is to propose an optimal forwarding strategy based on ranking the interfaces using the ordered weighted averaging operator.
Forwarding in NDN is a decision process at the router. The FIB matches the longest name prefix of a record to decide the interface for forwarding an interest packet. Forwarding is best-effort data retrieval, as the data may be found in the content store of an upstream router. If a consumer does not receive the requested data, it may retransmit the interest packet. Identifying the loss of a packet is the responsibility of the consumer. For every interest request, an RTT is measured, and retransmission occurs only if the RTT has not expired.

The FIB maintains a record of the interfaces on which an interest is received. This information is then used to map received data against sent interests. The name prefix and nonce pair is used to uniquely identify an incoming interest, thereby avoiding duplication. Therefore, interest packets cannot loop. This mechanism allows routers to freely retry multiple alternative paths in interest forwarding [2, 4]. The forwarding process in NDN is shown in Fig. 1.
In this paper, we propose optimal forwarding based on a multi-attribute decision-making method applied at each interface of an NDN router. For every entry in the FIB, the values of all the attributes are recorded in a matrix per time interval. The weight vector assigns a weight to each attribute. The maximizing deviation method then calculates the total deviation among all the interfaces with respect to each attribute. Finally, the objective function ranks the alternatives according to the weights of each alternative. The strategy layer then forwards the interest to the highest ranked interface. The objective weight assignment to every attribute is defined over a time interval for better load balancing. The main contributions of this paper are outlined as follows:

• The paper introduces the concept of a maximizing deviation-based optimal forwarding strategy founded on multi-attribute decision-making with the OWA operator.
• Our approach to decision-making has good extensibility, as it is based on the real-time attribute values observed by the router in a given interval.
• Optimal forwarding is resilient to any kind of application scenario, as any application parameter is easily configurable.
• Evaluation of the optimal forwarding strategy against default methods such as BestRoute and Flooding and against recent strategies such as EPF clearly demonstrates its effectiveness and scalability in terms of dynamic network changes and load balancing.
The rest of this paper is organized as follows: Sect. 2 presents related work on designing forwarding strategies, Sect. 3 introduces the background of OWA and our methodology based on MADM, and the performance analysis with ndnSIM is presented in Sect. 4. We conclude our work in Sect. 5.
2 Related Works
3 Methodology
This section deals with the modeling and description of the MDOF strategy, which is based on multi-attribute decision-making using the OWA operator for interest forwarding in NDN. First, we describe the important attributes of the interfaces available for decision-making. Subsequently, we define the procedure to calculate the weights and the optimal solution. The MDOF process ranks the interfaces from highest to lowest weight. The optimal solution determines the maximum deviation of an alternative with respect to an attribute. This deviation is then used to maximize the interest satisfaction ratio for higher throughput.

In order to forward an interest, a node needs to select the optimal interface from the available alternatives for every name prefix defined in the FIB according to a criterion. As NDN has a stateful forwarding plane, it maintains the status of each attribute in the strategy layer. Low RTT and low hop count attributes are of benefit type, and attributes such as delay and throughput are of cost type [8]. Table 1 shows the overall attributes, their type, and their dimension of measurement. For an objective assessment of the attribute values, a measurement time interval is required. After every time interval, the strategy layer calculates the attribute values and their weights according to the given procedure. When designing the interface selection criterion, the attribute weights of each interface are unknown. The weights are calculated with a weight vector defined over the range of attributes for every interface.
F(O) = \sum_{j=1}^{n} w_j a_j \qquad (1)
then the function F(O) is called an ordered weighted averaging operator, where a_j is the jth largest of a collection of arguments \beta_i (i = 1, 2, \ldots, n), which implies that the arguments a_j are arranged in descending order, a_1 \ge a_2 \ge a_3 \ge \cdots \ge a_n. Here w_j is the weighting vector associated with the function F(O), with w_j \ge 0, j = 1, 2, \ldots, n, and \sum_{j=1}^{n} w_j = 1. The weight w_i is associated not with a particular argument but with position i, i.e., it specifies the weight of the argument at position i.
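A minimal numerical reading of Eq. (1) is given below: the arguments are sorted in descending order and combined with the position weights. The weight and argument values in the example are arbitrary.

```python
# Minimal OWA aggregation per Eq. (1): sort the arguments in descending
# order, then take the weighted sum with the position weights w_j.

def owa(weights, arguments):
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    ordered = sorted(arguments, reverse=True)   # a_(1) >= a_(2) >= ... >= a_(n)
    return sum(w * a for w, a in zip(weights, ordered))


if __name__ == "__main__":
    # 0.4*0.9 + 0.3*0.7 + 0.2*0.6 + 0.1*0.3 = 0.72
    print(owa([0.4, 0.3, 0.2, 0.1], [0.6, 0.9, 0.3, 0.7]))
```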
\sum_{j=1}^{m} v_j^2 = 1 \qquad (4)

Z_i(v) = \sum_{j=1}^{m} P_{ij} v_j \qquad (5)

D_{ij}(v) = \sum_{k=1}^{n} \left| P_{ij} v_j - P_{kj} v_j \right| \qquad (6)

The total deviation among all interfaces related to a specific attribute a_j is given as

D_j(v) = \sum_{i=1}^{n} D_{ij}(v) = \sum_{i=1}^{n} \sum_{k=1}^{n} \left| P_{ij} - P_{kj} \right| v_j \qquad (7)
To obtain the weight vector v that maximizes the total deviation among all interfaces with respect to all attribute values, the objective function is given as

\max D(v) = \sum_{j=1}^{m} D_j(v) = \sum_{j=1}^{m} \sum_{i=1}^{n} \sum_{k=1}^{n} \left| P_{ij} - P_{kj} \right| v_j \qquad (8)

which leads to the optimization model

\max D(v) = \sum_{j=1}^{m} \sum_{i=1}^{n} \sum_{k=1}^{n} \left| P_{ij} - P_{kj} \right| v_j \qquad (9)

such that

v_j \ge 0, \; j = 1, 2, \ldots, m, \qquad \sum_{j=1}^{m} v_j^2 = 1
Normalizing v_j^*,

v_j = \frac{v_j^*}{\sum_{j=1}^{m} v_j^*}, \quad j = 1, 2, \ldots, m \qquad (13)
Based on the above analysis, the maximizing deviation-based method for interface selection is given in Algorithm 1.
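Since Algorithm 1 itself is not reproduced in this excerpt, the sketch below illustrates the computation flow of Eqs. (5)–(13): normalize the attribute matrix (a min–max normalization for benefit/cost attributes is assumed here, as the paper's Eqs. (2)–(3) are not shown), derive the maximizing-deviation weights, and rank the interfaces by their overall score Z_i(v). The attribute matrix and the benefit/cost assignment are hypothetical.

```python
# Sketch of maximizing-deviation weighting and interface ranking (Eqs. (5)-(13)).
# The raw attribute matrix, the benefit/cost assignment, and the min-max
# normalization are illustrative assumptions.
import numpy as np

# Rows: candidate interfaces; columns: attributes (e.g., rtt, hop count, delay, throughput).
raw = np.array([
    [20.0, 3, 5.0, 900.0],
    [35.0, 2, 9.0, 700.0],
    [25.0, 4, 6.0, 850.0],
    [30.0, 3, 7.0, 820.0],
])
benefit = np.array([False, False, False, True])     # True where larger is better (assumed)


def normalize(raw, benefit):
    lo, hi = raw.min(axis=0), raw.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    p = (raw - lo) / span
    return np.where(benefit, p, 1.0 - p)             # cost attributes are inverted


def maximizing_deviation_weights(P):
    # v*_j is proportional to the total deviation of attribute j (Eq. (8));
    # it is then normalized so that the weights sum to one (Eq. (13)).
    deviation = np.abs(P[:, None, :] - P[None, :, :]).sum(axis=(0, 1))
    v_star = deviation / np.sqrt((deviation ** 2).sum())
    return v_star / v_star.sum()


if __name__ == "__main__":
    P = normalize(raw, benefit)
    v = maximizing_deviation_weights(P)
    scores = P @ v                                    # Z_i(v) of Eq. (5)
    ranking = np.argsort(-scores)
    print("weights:", np.round(v, 3))
    print("interface ranking (best first):", ranking)
```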
4 Experimentation
In this section, we evaluate the performance of optimal forwarding (in terms of drop rate, throughput, interest satisfaction ratio, etc.) by comparing our work to the standard forwarding algorithms available in the ndnSIM simulator [9]. The algorithms selected for comparison are BestRoute, Flooding, and EPF. The objective of the evaluation is to verify the effectiveness of our strategy under realistic network scenarios.
prefix such as NDN:/server/chunk/id/number. The simulation is run for 25, 50, and 100 s to observe the performance of stable flows.
The client node requests data at a constant rate of 1000 packets per second. As shown in Fig. 3a, MDOF outperforms the BestRoute and EPF strategies in terms of throughput. BestRoute tries to use all the paths available for sending interests, whereas the MDOF and EPF strategies choose the best-ranked interface for sending the available interests. This causes MDOF to converge quickly to high utilization of the available route.

EPF, being based on an entropy calculation for each attribute, converges slowly to high utilization. In our scenario, EPF utilization converges to a level as high as MDOF after a few seconds of simulation stabilization. Figure 3b depicts the overall packet drop rate for an interest sending frequency of 1000 packets per second. Here too, MDOF has the lowest packet drop rate compared with the other strategies. Only EPF shows nearly the same packet drop rate as MDOF. When a client requests content at high speed, in BestRoute and Flooding the links are not fully utilized
5 Conclusion
References
1. Jacobson V, Smetters DK, Thornton JD, Plass MF, Briggs NH, Braynard RL (2009) Networking
named content. In: Proceedings of ACM CoNEXT
2. Yi C, Afanasyev A, Wang L, Zhang B, Zhang L (2012) Adaptive forwarding in named data
networking. ACM SIGCOMM Comput Commun Rev
3. Zhang L, Afanasyev A, Burke J, Jacobson V, Crowley P, Papadopoulos C, Wang L, Zhang B
(2014) Named data networking. ACM SIGCOMM Comput Commun Rev 66–73
4. Zhang L, Estrin D, Burke J, Jacobson V, Thornton JD, Smetters DK, Zhang B, Tsudik G, Massey D, Papadopoulos C, Abdelzaher T (2010) Named data networking (NDN) project. Relatório Técnico NDN-0001, Xerox Palo Alto Research Center-PARC
5. Lei K, Wang J, Yuan J (2015) An entropy-based probabilistic forwarding strategy in named data
networking. In: IEEE international conference on communications (ICC), pp 5665–5671
6. Gong L, Wang J, Zhang X, Lei K (2016) Intelligent forwarding strategy based on online machine
learning in named data networking. In: IEEE Trustcom/BigDataSE/I SPA, pp 1288–1294
7. Su J, Tan X, Zhao Z, Yan P (2016) MDP-based forwarding in named data networking. In: 35th
Chinese control conference (CCC), pp 2459–2464
8. Xu Z (2015) Uncertain multi-attribute decision making: methods and applications. Springer
9. Afanasyev A, Moiseenko I, Zhang L (2012) ndnSIM: NDN simulator for NS-3. University of
California, Los Angeles, Technical Report
A Spatial and Spectral Feature Based
Approach for Classification of Crops
Using Techniques Based on GLCM
and SVM
Abstract This paper highlights the study regarding the classification of crop types
using the techniques based on Gray Level Co-occurrence Matrix (GLCM) and sup-
port vector machine (SVM). The dataset used was from the IRS LISS-IV sensor with 5.8 m spatial resolution, having three spectral bands, acquired on 4 October 2014 for our chosen location at 20°07′13.5″N 75°23′05.3″E. Classification of all three bands fol-
lowed by classification of GLCM measures (of all three bands) was accomplished
by using Support Vector Machine classifier with Radial Basis Function. The accu-
racy of classification obtained from GLCM was 90.29% with the Kappa coefficient
0.88 whereas the corresponding values obtained from three band classification were
86.04% and 0.83, indicating the superiority of the GLCM-based approach.
1 Introduction
The study area selected for this work mostly covers crops and is a portion of the Kachori, Pimpalgaon, Pal, Walan, and Wanegaon villages in Aurangabad District of Maharashtra State, India. These villages are located at 20°07′13.5″N 75°23′05.3″E, about 35 km north of Aurangabad city, 5 km from Phulambri, and 350 km from the state capital of Maharashtra; the area is surrounded by Khultabad and Kanand Taluka towards the west and Sillod Taluka towards the east [4]. The study area is shown as a standard false color composite in Fig. 1.
The IRS LISS-IV data have 5.8 m spatial resolution at nadir. LISS-IV is a high-resolution multispectral sensor that operates in three bands (B2, B3, and B4). LISS-IV can be operated in either of two modes: it has a swath of 23 km in the multispectral mode for the three bands, while in panchromatic mode the full swath of 70 km can be covered in any one single band, which can be chosen by ground command [5]. The camera can be tilted in the across-track direction, allowing the same area to be revisited after 5 days. The data used in this work are of 4 October 2014 (Kharif season), received from the National Remote Sensing Centre (NRSC, ISRO), India. Table 1 lists the wavelengths of all three bands of LISS-IV data in multispectral mode. The acquired data are projected onto UTM (Universal Transverse Mercator) zone 43 North with the WGS-84 (World Geodetic System) datum. Ground truth data were collected with the MyGPS coordinate Android application on a Global Positioning System (GPS) enabled smartphone.
2 Method Used
Several steps need to be followed to analyze remote sensing data; the method adopted to carry out this work is discussed below.
2.1 Preprocessing
The data received from the NRSC were geometrically and radiometrically corrected. In preprocessing, all three bands were first layer-stacked and a spatial subset of size 1825 × 1646 was created, after which the Gray Level Co-occurrence Matrix was computed.
2.2 GLCM
For statistical texture analysis, texture features are computed from the statistical distribution of observed combinations of DN values at specified locations relative to each other in the image. According to the number of intensity points or pixels in each combination, the statistics can be categorized into first-order, second-order, and higher-order statistics. The Gray Level Co-occurrence Matrix (GLCM) method is used to extract second-order statistical texture features [6–10]. Eight texture measures, as given in Table 2, were computed with a processing window of 3 × 3 and a co-occurrence shift of X = 1 and Y = 1. The resultant image for each band of LISS-IV data is given in Fig. 2, whereas the equation for each of these measures is given in Table 2.
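The following Python sketch illustrates how per-pixel GLCM texture measures of one band could be computed with a 3 × 3 window and a (1, 1) co-occurrence shift. It is only a minimal illustration, not the authors' processing chain: it uses scikit-image's graycomatrix/graycoprops, computes only four of the eight measures listed in Table 2, and quantizes the band to 64 gray levels for compactness.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

def glcm_features(band, window=3, levels=64):
    """Slide a 3x3 window over one band and compute GLCM texture measures
    for a co-occurrence shift of (X, Y) = (1, 1), as described in Sect. 2.2."""
    # Quantize the band to a small number of gray levels to keep the GLCM compact
    q = np.floor(band.astype(float) / band.max() * (levels - 1)).astype(np.uint8)
    half = window // 2
    props = ("contrast", "homogeneity", "energy", "correlation")  # subset of the 8 measures
    out = np.zeros(q.shape + (len(props),))
    for r in range(half, q.shape[0] - half):
        for c in range(half, q.shape[1] - half):
            win = q[r - half:r + half + 1, c - half:c + half + 1]
            glcm = graycomatrix(win, distances=[1], angles=[np.pi / 4],  # offset of one row and one column
                                levels=levels, symmetric=True, normed=True)
            out[r, c] = [graycoprops(glcm, p)[0, 0] for p in props]
    return out

# Example on a small random band (in practice, one LISS-IV band of the 1825 x 1646 subset)
band = np.random.randint(0, 255, (64, 64)).astype(np.uint8)
features = glcm_features(band)
print(features.shape)
```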
2.3 Classification
As per the visual image interpretation and ground truth data, 249 pixels were trained for seven classes, namely Cotton, Maize, Sugarcane, Fallow land, Settlement (built-up area), Rock area, and Waterbody, from a subset of the image. To perform classification, two approaches were followed: initially, a Support Vector Machine classifier with an RBF kernel was applied to the original three bands of the image; in the second approach, similar training pixels were selected from all eight GLCM measures of all three bands and SVM with RBF was applied to these layers.
SVM is one of the most widely used techniques for supervised classification [11] and generates good results from noisy and complex data. SVM classifiers, characterized by self-adaptability, rapid learning speed, and limited requirements on training sample size, have proved to be a fairly reliable methodology in the smart processing of remote sensing data [12, 13]. SVM separates the classes with a decision boundary that maximizes the margin between them. This decision boundary is called a hyperplane, and the nearby data points are referred to as support vectors, which are critical components of the training sets. Several kernels can be used with SVM, namely linear, polynomial, sigmoid, and Radial Basis Function (RBF); of these, the RBF kernel gives better results than the others.
Function for SVM is given in Eq. (1).
f(x) = Σ_i α_i K(x_i, x) + b                                 (1)

where α_i is the Lagrange multiplier and K(x_i, x) is the kernel function, which in this work is the Radial Basis Function given in Eq. (2):

K(x, x′) = exp(−γ ‖x − x′‖²)                                 (2)
where γ is the gamma term, a floating point value greater than or equal to 0.01; we have considered a gamma value of 0.042. The RBF kernel maps samples into a higher dimensional space nonlinearly, so it can deal with the case in which the relation between class labels and attributes is nonlinear [11, 14, 15]. The classification layers generated are given in Fig. 3. Finally, accuracy assessment was made using the confusion matrix, wherein commission error and omission error, followed by Producer's Accuracy (PA) and User's Accuracy (UA), were computed for all classes, and the overall accuracy along with the Kappa coefficient was also computed for both classification layers.
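A minimal sketch of the classification step, assuming scikit-learn is used, is shown below. The gamma value 0.042 is taken from the text; the penalty parameter C and the random stand-in data are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

# Training pixels: rows are pixels, columns are either the 3 spectral bands or the
# 24 stacked GLCM layers (8 measures x 3 bands); labels are the seven classes.
rng = np.random.default_rng(0)
X_train = rng.random((249, 24))          # stand-in for the 249 trained pixels
y_train = rng.integers(0, 7, 249)        # stand-in class labels (0..6)

clf = SVC(kernel="rbf", gamma=0.042, C=1.0)   # RBF kernel with the gamma used in the paper
clf.fit(X_train, y_train)

X_test = rng.random((659, 24))           # stand-in for the 659 testing pixels
y_pred = clf.predict(X_test)
print(y_pred[:10])
```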
Result analysis was performed using an error matrix generated from 659 testing pixels, from which Producer's Accuracy (PA) and User's Accuracy (UA) were computed. UA is defined as the ratio of the main diagonal cell value to the sum of the same row; equivalently, it can be derived by subtracting the commission error from 100. Producer's Accuracy is the ratio of the main diagonal cell of a column to the sum of the same column; it can also be obtained by subtracting the omission error from 100.
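The accuracy measures described above can be computed directly from the confusion matrix, as in the following sketch (the toy 3 × 3 matrix is illustrative; the matrices in this work are 7 × 7 over 659 test pixels, and rows are assumed to hold the classified classes with reference classes in the columns).

```python
import numpy as np

def accuracy_report(cm):
    """Producer's and User's accuracy from a confusion matrix whose rows are the
    classified (map) classes and whose columns are the reference classes."""
    diag = np.diag(cm).astype(float)
    ua = 100.0 * diag / cm.sum(axis=1)        # User's Accuracy = 100 - commission error
    pa = 100.0 * diag / cm.sum(axis=0)        # Producer's Accuracy = 100 - omission error
    overall = 100.0 * diag.sum() / cm.sum()
    # Kappa coefficient from the same matrix
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / cm.sum() ** 2
    kappa = (overall / 100.0 - pe) / (1.0 - pe)
    return pa, ua, overall, kappa

# Toy 3-class example
cm = np.array([[50, 3, 2],
               [4, 60, 6],
               [1, 5, 70]])
pa, ua, oa, kappa = accuracy_report(cm)
print(pa, ua, oa, kappa)
```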
The results obtained from the confusion matrix clearly show that classification of all GLCM stacked layers gives better results than the ordinary three bands. Table 3 presents the PA and UA for all seven classes for both approaches.
The overall accuracy obtained for the three bands is 86.04% with a Kappa coefficient of 0.83, whereas for GLCM it was 90.29% with a Kappa coefficient of 0.88. Compared with Fallow land, Waterbody, Settlement, and Rock area, the PA and UA are lower for all three crop classes because of the similarity in reflectance behavior and the complicated spatial arrangement of crops. As far as the literature is concerned, comparing results with those obtained by others is a very difficult task for RS data classification because of variations in datasets, study areas, samples, and ground truth data. Researchers have used first- and second-order GLCM measures for texture feature extraction and applied SVM for classification to classify soil, vegetation, and water body, obtaining good accuracy [16]. Zhang et al. used two methods: in the first, the mean of the texture values in several directions was used as the texture feature of the image, whereas in the second, texture features extracted with a direction-measure and GLCM fusion algorithm, together with SVM with a Gaussian radial basis function, were used for classification. They performed these experiments on GaoFen-2, QuickBird, and GeoEye-1 sensor data, which have high spatial resolution, and obtained superior results [17]. These works were applied to data with less than 3 m spatial resolution, which is very high, whereas we have used data with 5.8 m spatial resolution and obtained reasonably good accuracy.
3 Conclusion
In this work, classification was performed on all three bands as well as on the 8 GLCM measures of those three bands. As per the confusion matrix, the average Producer's and User's accuracies for all seven classes are higher when SVM with RBF is applied to the GLCM measures than when it is applied to the three bands of LISS-IV data. The results show that GLCM, when used with a support vector machine, gives better results for crop type classification. Accuracy can be enhanced further by using remote sensing data with higher spatial resolution.
References
4. http://www.onefivenine.com/india/villages/Aurangabad-District/Phulambri/Kanhori
Accessed 27 Mar 2016
5. NRSA, 2003, IRS P6 Data User Manual, IRS-P6/NRSA/NDC/HB-08/03 Accessed Apr 2016
6. Albregtsen F (2008) Statistical texture measures computed from gray level co-occurrence matrices. Image Processing Laboratory, vol 5, Department of Informatics, University of Oslo
7. Dhumal RK, Vibhute AD, Nagne AD, Rajendra YD, Kale KV, Mehrotra SC (2015) Advances
in classification of crops using remote sensing data. Int. J. Adv. Remote Sens. GIS 4(1):1410
8. Anys H, Bannari A, He DC, Morin D (1994) Texture analysis for the mapping of urban areas
using airborne MEIS-II images. Proc. First Int. Airborne Remote Sens. Conf. Exhib. 3:231–245
9. Hall-Beyer M (2006) The GLCM tutorial home page. http://www.fp.ucalgary.ca/mhallbey/tutorial.htm. Updated February 2007. Accessed Mar 2016
10. Haralick R, Shanmugan K, Dinstein I (1973) Textural features for image classification. IEEE
Trans. Syst. Man Cybern. 3(6):610–621
11. Hsu CW, Chang CC, Lin CJ (2003) A practical guide to support vector classification
12. Mountrakis G, Im J, Ogole C (2011) Support vector machines in remote sensing: A review.
ISPRS J. Photogram. Remote Sens. 66(3):247–259
13. Waske B, Benediktsson J, Sveinsson, J (2009) Classifying remote sensing data with support
vector machines and imbalanced training data. Multiple Classifier Syst. pp 375–384
14. Richards, JA, Richards, JA (1999). Remote sensing digital image analysis, vol 3. Springer,
Berlin
15. Vibhute AD, Kale KV, Dhumal RK, Mehrotra SC (2015) Soil type classification and mapping
using hyperspectral remote sensing data. In: International Conference on Man and Machine
Interfacing (MAMI), IEEE, pp. 1–4.
16. Dixit A, Hedge N, Reddy BE (2017) Texture feature based satellite image classification scheme
using SVM. Int. J. Appl. Eng. Res. 12(13):3996–4003
17. Zhang X, Cui J, Wang W, Lin C (2017) A study for texture feature extraction of high-resolution satellite images based on a direction measure and gray level co-occurrence matrix fusion algorithm. Sensors 17(7):1474
Lisp Detection and Correction Based
on Feature Extraction and Random
Forest Classifier
1 Introduction
A lisp is a Functional Speech Disorder (FSD). Lisps are caused by difficulty in learning to make a specific speech sound. Misplacement of the tongue in the mouth can distort words and syllables, resulting in a lisp. Lisping problems are commonly identified among growing children; lisps are often temporary and go away after a certain age. Lisping is characterized by an inability to pronounce the sounds of 's', 'z', 'r', 'l', and 'th'. People with this speech impediment face problems when attempting to say 's' and 'z', and also sounds like 'sh' in 'shoes', 'ch' in 'chair', 'zh' as in 'measure', and 'dg' as in 'badge'. Lisps have been classified into four types. Interdental or frontal lisp: words containing 's' and 'z' sound as 'th'; for example, 'sleep' is pronounced as 'thleep', 'zoo' as 'thoo', and 'buzz' as 'buth'. Dentalised lisp: a dental lisp can be observed at a very young age and often fades away with age; in this case the sound produced is slightly muffled. Lateral lisp: people with this kind of lisp face difficulty in pronouncing words having /l/, and the sound made is often wet or accompanied by spit. Palatal lisp: people find
it difficult to say 'sh' as in 'share' and 'ch' as in 'church'. HMMs are a system modeling approach based on statistical representations of quantified symbols and/or quantities [1]. In each state, corresponding to a particular vector, the HMM has a statistical distribution that gives the likelihood of each observed vector. Each word has a different output distribution; an HMM for speech is a string of HMMs of individual words [1, 2]. Neural networks are an acoustic feature modeling approach for spoken word classification, phoneme classification, and speech and speaker recognition [3]. Neural networks are not always reliable for continuous speech or speaker recognition because of their inability to reproduce the temporal variations of the acoustic features [4]. Yakoub et al. [5] used a concatenating algorithm and a grafting method to correct faultily uttered phonemes. Benati and Bahi [6] and Ahcène et al. [7] discuss a method of spoken term detection based on the extraction of Mel Frequency Cepstral Coefficients (MFCC). The MFCC coefficients are a set of acoustic vectors capable of representing an audio sample and what it represents [8]. MFCC is considered the "standard" and one of the most popular feature extraction techniques in speaker recognition [9].
In this paper, lisp identification from dysarthric speech is performed by means of segmentation to split the sentence into individual words, followed by MFCC feature extraction to obtain the acoustic vectors that are unique to an audio sample, then classification as lisp or non-lisp with the Random Forest classifier, which is highly effective on account of being made up of multiple decision trees, and finally feature matching to recognize and correct the lisped words if detected.
The rest of the paper is arranged as follows: Sect. 2 explains the application of the Random Forest classifier for lisp identification, Sect. 3 discusses the proposed procedure, Sect. 4 presents the results and discussion, and finally Sect. 5 gives the conclusion.
3 Proposed Approach
A database has been made containing information of common lisping words and
its corresponding proper word, i.e., words having /s/ and /z/ with their correct pro-
nunciation and lisping pronunciation. These information are essentially coefficients
or features extracted using MFCC feature extraction. An algorithm has proposed in
this research paper that will take a sentence as input from a person having lisping
problem, segment the sentence into individual words and identify the lisped words
and rectify it with the correct word and give output of the correct form of sentence.
Quintessentially, it is a speech recognition system for people having this functional
Speech Disorder.
In order to analyze speech, one must have a reference of analysis. To this purpose,
the initial stages of this paper are dedicated to formation of a basis of comparison.
Several audio samples were obtained of different words that comprise the English
language. The individual words’ MFCC coefficients were extracted and among the
3.1 Segmentation
The first part of speech-to-text conversion is segmenting the input sentence into individual tokens or words. The audio input is sampled at a rate of 8000 Hz. From these samples, each word is segmented using a thresholding technique. First, a maximum filter is applied to the samples to remove noise. Figure 1 illustrates the proposed lisp detection and correction algorithm. Initially, a threshold is set as follows. Let Amax be the maximum sample value and Amin the minimum sample value; then

Threshold = (Amax + Amin) / 4                                (1)

The consecutive set of values higher than the threshold is segmented into a word and appended to an array. The value of the threshold is adapted by averaging it with the local threshold of each word detected. This is how the input is segmented into samples of individual words.
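A minimal sketch of this threshold-based segmentation is given below. The maximum-filter window length and the synthetic test signal are assumptions; the adaptation of the threshold follows the averaging rule described above.

```python
import numpy as np
from scipy.ndimage import maximum_filter1d

def segment_words(samples, win=400):
    """Threshold-based word segmentation (Eq. 1); returns one array per word."""
    env = maximum_filter1d(np.abs(samples), size=win)   # maximum filter to suppress noise
    threshold = (env.max() + env.min()) / 4.0           # Eq. (1)
    words, start = [], None
    for i, v in enumerate(env):
        if v > threshold and start is None:
            start = i
        elif v <= threshold and start is not None:
            words.append(samples[start:i])
            # adapt the threshold by averaging with the local threshold of the word
            local = (env[start:i].max() + env[start:i].min()) / 4.0
            threshold = (threshold + local) / 2.0
            start = None
    if start is not None:
        words.append(samples[start:])
    return words

# Example: two synthetic "words" separated by silence, sampled at 8 kHz
fs = 8000
t = np.linspace(0, 0.2, int(0.2 * fs), endpoint=False)
speech = np.concatenate([np.sin(2 * np.pi * 300 * t),
                         np.zeros(fs // 4),
                         np.sin(2 * np.pi * 500 * t)])
print(len(segment_words(speech)))
```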
Figure 2a illustrates the audio file (sentence) and the subsequent figure in it is the
thresholding signal. Figure 2b shows the individual words in the sentence after the
segmentation procedure.
Fig. 1 Flow diagram of the algorithm for lisp detection and correction
Fig. 2 a Audio samples of the sentence before segmentation and the thresholding signal. b Individual words of the sentence after segmentation
3.2 MFCC Feature Extraction
The following procedure is used to extract the MFCC features (Fig. 3) for a given speech sample.
3.2.1 Pre-emphasis
A pre-emphasis filter is used to reduce the noise present in the input speech signal. Since the majority of the noise lies in the low-frequency range, pre-emphasis attenuates the lower frequencies while leaving the higher frequencies unchanged. Equation (2) gives the transfer function of the pre-emphasis filter:

H(z) = 1 − 0.95 z⁻¹                                          (2)
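Assuming the standard FIR form of the filter above, a pre-emphasis step could be implemented as follows (the stand-in signal is illustrative).

```python
import numpy as np

def pre_emphasis(x, alpha=0.95):
    """FIR pre-emphasis y[n] = x[n] - alpha * x[n-1], boosting high frequencies
    relative to the low-frequency region where most of the noise lies (Eq. 2)."""
    return np.append(x[0], x[1:] - alpha * x[:-1])

x = np.random.randn(8000)      # one second of audio at 8 kHz (stand-in)
y = pre_emphasis(x)
print(y.shape)
```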
3.2.2 Framing
Here, the speech signal is split into several short frames so that, instead of processing the whole signal at once, it is processed frame by frame. The frame size is generally in the range of 0–20 ms.

Windowing is performed to remove the discontinuities from the framed signal and to discard redundant features. A Hamming window is used for this purpose. The Hamming window generation formula is given by
H(n) = 0.54 − 0.46 cos(2πn/256)                              (3)
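The framing and windowing steps can be sketched as below; the frame length of 256 samples (matching the window formula) and the 50% hop are assumptions, since the exact frame parameters are not stated.

```python
import numpy as np

def frame_and_window(signal, frame_len=256, hop=128):
    """Split the signal into short overlapping frames and apply a 256-point
    Hamming window (Eq. 3: 0.54 - 0.46*cos(2*pi*n/256))."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx]
    window = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(frame_len) / frame_len)
    return frames * window

frames = frame_and_window(np.random.randn(8000))
print(frames.shape)   # (n_frames, 256)
```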
The Fast Fourier Transform is performed to convert the signal from the time domain to the frequency domain.

The Mel filter bank used here consists of 20 triangular filters on the Mel scale. The filters are applied to the magnitude spectrum of the signal to convert it to the Mel scale. To compute the Mel value for a given frequency f, the following approximate formula is used:

Mel(f) = 2595 log₁₀(1 + f/700)                               (4)
The discrete cosine transform is applied to obtain the 20-point features. These features are the MFCCs (Mel frequency cepstral coefficients), or acoustic vectors, of the voice input signal. Equation (5) shows the discrete cosine transform, for f = 1, 2, …, n:

X_f = (1/√n) Σ_{i=0}^{n−1} x_i cos(π f (i + 0.5)/n)          (5)
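Putting Eqs. (4) and (5) together, the following sketch computes 20 MFCCs from one windowed frame with a 20-filter triangular Mel bank and an orthogonal DCT. The FFT length of 256 and the small offset added before the logarithm (to avoid the infinities discussed in the observations below) are assumptions for illustration.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)        # Eq. (4)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_from_frame(frame, fs=8000, n_filters=20, n_fft=256):
    """Magnitude spectrum -> 20 triangular Mel filters -> log -> DCT (Eq. 5)."""
    spectrum = np.abs(np.fft.rfft(frame, n_fft))
    # Triangular filter bank between 0 Hz and fs/2, equally spaced on the Mel scale
    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / fs).astype(int)
    fbank = np.zeros((n_filters, len(spectrum)))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    energies = np.log(fbank @ spectrum + 1e-10)       # small offset avoids log(0) -> -inf
    # DCT of the log filter-bank energies gives the 20 MFCCs (Eq. 5)
    n = n_filters
    i = np.arange(n)
    dct = np.cos(np.pi * np.outer(np.arange(n), i + 0.5) / n) / np.sqrt(n)
    return dct @ energies

coeffs = mfcc_from_frame(np.random.randn(256))
print(coeffs.shape)   # (20,)
```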
Once the Coefficients are obtained, and grouped together, it is observed that:
1. There are too many features to deal with, some being unnecessary on account
of them being higher order expansions of the Discrete Cosine Transform, and
hence unlikely to contain much acoustic information.
2. The DCT properties of de-correlation result in the lower order coefficients having
most information.
3. Reduced Dimensionality helps in improved feature matching.
4. If certain parts of the audio sample contain only zero-valued samples, the MFCC generation procedure tends to represent them as infinity; analysis with these terms leads to grave errors in calculation.
Once the sorted cepstral coefficients are obtained, the word is checked for a lisp by passing the coefficients through the Random Forest classifier. If the word is classified as a lisped word, its MFCC values are compared with the bank of stored MFCC coefficients and the entry with the highest correlation is searched for. The matched word is identified and displayed as
the correct word. Popular methods of matching are correlation, coherence, Chi-square distance measurement [11], Euclidean distance measurement, and Mirdist measurement. The correlation is computed by applying the correlation formula popularly used in the statistical analysis of data, in this case of the coefficients:

Correlation(x, y) = Σ_i (x_i − μ_x)(y_i − μ_y) / (σ_x σ_y)   (6)

C_xy(f) = |G_xy(f)|² / (G_xx(f) G_yy(f))                     (7)
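A sketch of the correlation-based matching of Eq. (6) is given below; the stored bank of coefficients and its words are hypothetical, and a 1/n factor is included so that the measure behaves as a standard Pearson coefficient.

```python
import numpy as np

def correlation(x, y):
    """Pearson-style correlation between two MFCC vectors (Eq. 6)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sum((x - x.mean()) * (y - y.mean())) / (len(x) * x.std() * y.std() + 1e-12)

def correct_lisp_word(mfcc_lisp, mfcc_bank):
    """Return the vocabulary word whose stored MFCCs correlate best with the
    lisped word's MFCCs; mfcc_bank maps word -> stored coefficient vector."""
    return max(mfcc_bank, key=lambda w: correlation(mfcc_lisp, mfcc_bank[w]))

# Hypothetical bank of stored coefficients for correctly pronounced words
bank = {"sleep": np.random.randn(20), "zoo": np.random.randn(20)}
print(correct_lisp_word(np.random.randn(20), bank))
```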
4 Results and Discussion
The feature extraction procedure was run on over 50 audio files containing over 50 distinct words of everyday English speech. 10% of this set contained the most common lisped words. The Random Forest classifier was initially trained with 70% of the data and tested against the remaining 30% of the dataset. The error in the output of the classifier was 7.1482%. In this paper, words misclassified as non-lisp even though they were lisped are considered Falsely Rejected words, and words which are in fact non-lisp but treated as lisp are Falsely Accepted words; FRR is the False Rejection Ratio and FAR is the False Acceptance Ratio. Table 1 compares the FAR and FRR values of different classifiers.
Figure 4a is a graphical representation of the basic principle of the Random Forest classifier: the classification accuracy increases with an increase in the number of decision trees. Figure 4b appears to show the opposite trend; it is the stochastic nature of the Random Forest classifier that prevents overfitting of the data.
Fig. 4 a Number of trees versus accuracy in classification in percentage of the Random forest
classifier. b Number of training audio data versus accuracy in classification in % of 3 classifiers
Fig. 5 Comparison of the three classifiers at a 70% training and 30% testing, and b 80% training and 20% testing
Figure 5a and b shows the variation of the three classifiers at 70% training and
80% training, respectively. Table 2 displays the mean accuracy of lisp detection for
the different classification algorithms.
5 Conclusion
Thus, this paper has presented the proposed algorithm to detect lisps by means of MFCC feature extraction and a Random Forest classifier. The proposed algorithm is able to detect lisps efficiently and correct them in the sentence. The Random Forest classification algorithm used gives superior results compared with other classification models. The algorithm successfully corrected the lisped words in sentences, and this model can be used in speech-to-text applications. This research work can be helpful in developing embedded speech-to-text systems for disabled people, and the model can be used in real-time systems that correct lisps for people with speaking disabilities. Future work will use different classification and statistical learning techniques to classify lisps and correct the words using correlation.
References
Keywords Adaptive noise control · Least mean square algorithm · Step size · Normalized least mean square algorithm · Mean square error
1 Introduction
The LMS algorithm was proposed by Widrow and Hoff in 1959 while studying pattern recognition; it is known as the adaptive linear element (Adaline) [4, 6]. The LMS algorithm is a stochastic gradient algorithm that iterates every tap weight of the filter in the direction of the gradient of the squared amplitude of the error signal with respect to that tap weight, as shown in Fig. 1 [1].
Generally, the LMS algorithm involves the following two steps. First, the filtering step is carried out on the reference signal using the adaptive filter, and the output is combined with the desired signal. The error signal is calculated from the difference between the desired signal and the output of the adaptive filter y(n). This error signal is then fed to the LMS algorithm and the filter weights are modified accordingly. It is assumed that the impulse responses used are modeled by finite impulse response (FIR) filters. q(n) is the noise to be canceled and a(n) is the reference signal. The Active Noise Cancelation controller has N weights, and er(n) is the error signal calculated as the difference between the desired signal x(n) and the output of the filter z(n).
The step size plays a very significant role in deciding the convergence performance of this algorithm. If the value is too small, the convergence rate is very low and hence not time efficient. With an increase in the value of the step size the convergence rate increases, but beyond a certain point the algorithm starts diverging. The LMS algorithm requires about 20 times the number of tap coefficients of iterations to converge in the mean square, and 2N + 1 multiplications per iteration, which increases linearly with the number of tap coefficients [1].
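A minimal LMS sketch for the noise-cancelation setting described above is given below. The filter length, the synthetic cosine-plus-noise example, and the use of µ = 0.009 (the LMS optimum reported later in this paper) are illustrative assumptions.

```python
import numpy as np

def lms(reference, desired, N=16, mu=0.009):
    """Standard LMS: filter the reference signal and adapt the N tap weights
    in the negative-gradient direction of the squared error."""
    w = np.zeros(N)
    y = np.zeros(len(desired))
    e = np.zeros(len(desired))
    for n in range(N, len(desired)):
        a = reference[n - N:n][::-1]      # most recent N reference samples
        y[n] = w @ a                      # filter output
        e[n] = desired[n] - y[n]          # error signal er(n)
        w += 2 * mu * e[n] * a            # weight update
    return y, e, w

# Example: cancel white Gaussian noise added to a cosine
t = np.arange(4000) / 8000.0
clean = np.cos(2 * np.pi * 100 * t)
noise = 0.3 * np.random.randn(len(t))
y, e, w = lms(reference=noise, desired=clean + noise, mu=0.009)
print("MSE:", np.mean((e[500:] - clean[500:]) ** 2))   # error signal approximates the clean input
```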
3 Recursive Least-Square
Recursive Least Squares (RLS) is another very popular adaptive filter [7, 8]. RLS has a better convergence rate than LMS; however, it involves higher computational complexity. RLS algorithms are efficient, and their performance depends on the forgetting factor, which ranges between 0 and 1. When it is close to 1, the algorithm has low misadjustment and high stability, but the tracking ability is reduced; a smaller value increases misadjustment and reduces stability but can increase the tracking ability [9].
In the RLS algorithm, the least-squares error (cost) function is minimized by selecting the filter coefficients wn and updating the filter as new data arrive [10]. The error signal e(n) and the desired signal are shown in Fig. 2, which is a negative feedback diagram.
An input signal p(n) is defined as
and
4 Normalized Least Mean Square (NLMS) Algorithm
As already mentioned, the basic disadvantage of LMS is that it has a fixed step size parameter, which requires a statistical understanding of the environment before adopting the filter algorithm. Factors such as the signal power, the nature of the noise, and the amplitude and frequency can widely affect the value chosen. The NLMS algorithm improves the LMS procedure by calculating the value of the step size at every iteration instead of assigning a fixed value to it [5]. This step size is inversely proportional to the instantaneous energy of the input signal coefficients. The recursion formula for the NLMS procedure is

f(n + 1) = f(n) + μ er(n) p(n) / (pᵀ(n) p(n))                (12)

Since NLMS is a revised version of the LMS algorithm, its practical implementation is quite similar. First, the filtered output is calculated [11]:

z(n) = Σ_{i=0}^{N−1} f_i(n) p(n − i) = fᵀ(n) p(n)            (13)

The difference between the desired and output signals gives the error signal. Then the filter tap weights are updated for the next iteration.
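A corresponding NLMS sketch, with the normalized update of Eq. (12), could look as follows; the small regularization constant eps and the test signal are assumptions.

```python
import numpy as np

def nlms(reference, desired, N=16, mu=0.006, eps=1e-8):
    """NLMS: same structure as LMS but the step size is normalized by the
    instantaneous input energy p^T(n) p(n), as in Eq. (12)."""
    f = np.zeros(N)
    e = np.zeros(len(desired))
    for n in range(N, len(desired)):
        p = reference[n - N:n][::-1]
        z = f @ p                              # filter output, Eq. (13)
        e[n] = desired[n] - z                  # error signal
        f += mu * e[n] * p / (p @ p + eps)     # normalized update, Eq. (12)
    return e, f

t = np.arange(4000) / 8000.0
clean = np.cos(2 * np.pi * 100 * t)
noise = 0.3 * np.random.randn(len(t))
e, f = nlms(reference=noise, desired=clean + noise, mu=0.006)
print("MSE:", np.mean((e[500:] - clean[500:]) ** 2))
```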
5 Proposed Algorithm
The NLMS algorithm is efficient; it was experimentally verified and tested using various noise signals such as additive white Gaussian noise, exponential noise, unit step, ramp, etc. It is observed that the nature and efficiency of the adaptive filter vary accordingly. The mean square error is calculated between the filtered output and the input signal, as shown in Fig. 3.
The input signal chosen was a cosine with a frequency of 10 kHz, and the step size (µ) of the adaptive filter was fixed at 0.006. The MSE varied with each
Table 1 NLMS algorithm tested after the addition of noises and their corresponding (MSE)
Type of noise Mean square error (MSE)
White Gaussian 0.0357
cos(t2 ) 0.0917
Unit step 0.2081
Ramp 0.3501
sin(t) 0.0902
exp(t) 0.4167
execution, so an average over 20 execution cycles was calculated and the above results were obtained. Another important observation is that the value of the step size µ does determine the performance of the algorithm. The adaptive filter output can be improved by employing the LOESS non-parametric regression method, which merges multiple regression models in a k-nearest-neighbor-based meta-model [12]. The MSE of the NLMS algorithm after the addition of various noises is given in Table 1.
The smoothed graph obtained after this calculation is called the LOESS graph. A low-degree polynomial is fitted using weighted least squares at each point in the data range: more weight is given to points near the point whose response is being estimated and less to points further away. The weight function used is the tricube function

f(x) = (1 − |x|³)³                                           (17)
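A minimal LOESS sketch with the tricube weight of Eq. (17) and local linear fits is given below; the neighbourhood fraction and the synthetic signal are assumptions (a library routine such as statsmodels' lowess could equally be used).

```python
import numpy as np

def loess(x, y, frac=0.1):
    """Local weighted linear regression with the tricube weight of Eq. (17):
    w = (1 - |d|^3)^3 for normalized neighbour distances d."""
    n = len(x)
    k = max(2, int(frac * n))                  # size of the local neighbourhood
    smoothed = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]                # k nearest neighbours
        dn = d[idx] / d[idx].max()
        w = (1 - dn ** 3) ** 3                 # tricube weights
        sw = np.sqrt(w)
        A = np.vstack([np.ones(k), x[idx]]).T
        beta = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)[0]  # weighted LS fit
        smoothed[i] = beta[0] + beta[1] * x[i]
    return smoothed

x = np.linspace(0, 1, 400)
noisy = np.cos(2 * np.pi * 5 * x) + 0.3 * np.random.randn(400)
print(loess(x, noisy).shape)
```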
The output of NLMS adaptive filter with optimized step size is shown in Fig. 4
and the output after Loess smoothening is used which reduces the noise further as
shown in Fig. 5.
The graph in Fig. 6 shows how MSE varies with a change in the value of step
size(µ) for LMS and Fig. 7 shows how it varies with step size(µ) for NLMS. Table 2
shows the comparison of MSE values when unoptimized NLMS and the NLMS +
Loess algorithm was used over an active white Gaussian channel.
Fig. 4 a Input signal, which is a cosine function. b The NLMS adaptive output, with a frequency of 10⁴ Hz
Fig. 5 a Input cosine signal. b The NLMS adaptive output plus LOESS smoothening
The optimum value producing the least MSE was 0.009 for LMS and 0.006 for NLMS, so all the previous operations were made at µ = 0.006. When the proposed algorithm was used for a voice input, a significantly denoised output was obtained, as shown in Fig. 8.
NLMS was found to be more efficient than the other two algorithms discussed, LMS and RLS, and with the optimum coefficients derived in this paper the efficiency increases further. The optimized NLMS algorithm also works quite well for voice inputs with added white Gaussian noise, giving a considerably low error rate, as shown in Table 3.
Fig. 7 Graph shows the MSE with increasing value of µ for NLMS algorithm
6 Conclusion
We have presented an algorithm that can denoise a noisy input signal with an optimized weighted adaptive filter. Various existing algorithms have been compared with the proposed algorithm, and an efficient output was obtained. Several different types of additive noise were also added, and the flexibility of the algorithm was observed, which altogether improves the overall filter performance. The noise cancelation performance of the proposed algorithm was found to be superior while preserving the simplicity of the normalized least mean square algorithm.
Fig. 8 Input voice and its corresponding adaptive output after filtering out the additive white Gaussian noise
Table 3 MSE comparison of the various algorithms discussed over a variety of additive noises added to a cosine signal

Algorithms       White Gaussian  cos(t²)  Unit step  Ramp   sin(t)  exp(t)
LMS              0.331           0.243    0.171      0.223  0.250   0.306
RLS              0.340           0.243    0.254      0.367  0.25    0.388
NLMS             0.023           0.181    0.410      0.458  0.232   0.496
Optimized NLMS   0.013           0.058    0.108      0.217  0.013   0.273
Proposed         0.006           0.032    0.098      0.173  0.01    0.198
References
1. Manikandan GS, Madheswaran M (2007) A new design of active noise feedforward control
systems using delta rule algorithm. ITJ 6:1162–1165
2. Hansen CN (2002) Understanding active noise cancellations, pp 6–10
3. Lee KA, Gan WS (2004) Improving convergence of the NLMS algorithm using constrained
subband updates. IEEE Signal Process Lett 11(9)
4. Pradhan SS, Reddy VE (1999) A new approach to subband adaptive filtering. IEEE Trans
Signal Process 47:655–664
5. Slock DTM, Member, IEEE (1993) On the convergence behavior of the LMS and the normalized
LMS algorithms. IEEE Trans Signal Process
6. Haykin S, Widrow B (2003) Least mean square LMS algorithm, pp 30–51
7. Haykin S (2002) Adaptive filter theory, 4th edn. Prentice-Hall, Englewood Cliffs, NJ
8. Benesty J, Huang Y (2003) Adaptive signal processing-applications to real-world problems.
Springer, Berlin, Germany
9. Van Vaerenbergh S, Santamaría I, Lázaro-Gredilla M (2012) Estimation of the forgetting factor
in kernel recursive least squares. In: 2012 IEEE international workshop on machine learning
for signal processing. Accessed 2016
10. Paleologu C et al (2008) A robust variable forgetting factor recursive least-squares algorithm
for system identification. IEEE Signal Process Lett 15
11. Fox J (2010) Nonparametric regression in R: an appendix to an R companion to applied regres-
sion, 2nd edn
12. Radhika et al (2011) Adaptive algorithms for acoustic echo cancellation in speech processing,
vol 7, no 1
Application of Generalized Constrained
Neural Network with Linear Priors
to Design Microstrip Patch Antenna
T. V. S. Divakar
Abstract This paper describes the use of the generalized constrained neural network with linear priors (GCNN-LP) as a knowledge-based neural network for the resonant frequency calculation of various microstrip configurations. Linear priors can be defined as prior knowledge that bears a direct (linear) relation to quantities of interest of the model, such as its variables, free parameters, or functions of them. Recently, the generalized constraint neural network with linear priors was suggested by Hu et al., and it forms the starting point for the proposed work. It takes many known priors such as equality, symmetry, ranking, interpolating points, etc., as prior knowledge about the problem. In this paper, GCNN-LP is applied to estimate the resonant frequency of rectangular, circular, and elliptical microstrip antennas.
1 GCNN-LP-Introduction
Figure 1 shows the architecture of the network. It consists of two submodels. The first one is a typical RBF neural network and the other is a dependent network (the partially known relationship submodel) that preserves the a priori knowledge about the problem. This model is useful in three respects. First, a more general approximation problem is defined in the presented method: the constraints imposed on the model can be more comprehensive forms of partially known relationships. Second, a two-way coupling structure is applied between the two submodels, namely a "partially known relationship" (PKR) submodel and a "neural network" submodel. This arrangement offers the flexibility to represent several forms of interaction between the two submodels. Third, new issues arising from relating the two submodels are revealed, and these are discussed below.
T. V. S. Divakar (B)
Department of ECE, GMR Institute of Technology, Rajam, India
e-mail: divakar.tvs@gmrit.org
The algorithm can take many types of priors, as listed in [1, 2]. While training, the algorithm takes two types of constraints as priors: "soft constraints" and "hard constraints." Hard constraints are the ones where the data are completely correct and reliable; soft constraints are the ones where the data are noisy [3]. In this part of the work, the performance of the algorithm is investigated considering interpolating points and monotonicity as priors. The algorithm is briefly presented as follows.
Let C = {(x_n, ξ_n), n = 1, 2, 3, ..., p}, where ξ_n is the value or the partial derivative of the desired function at the input x_n, and p is the number of linear priors. Then GCNN-LP is defined as

min_W  Err(W) + Σ_{k∈E∪I} γ_k ε_k                            (1)

subject to

−ε_j ≤ φ(x_j) W − ξ_j ≤ ε_j,  j ∈ E,
φ(x_i) W − ξ_i ≥ −ε_i,  i ∈ I,
ε_k ≥ 0,  k ∈ E∪I,

where φ(x_j) = [φ_U(x_j), φ_R(x_j)], with φ_U(x) = [1, φ_U1(x), ..., φ_Uh(x)] and φ_R(x) = [1, φ_R1(x), ..., φ_Rp(x)] built from the original radial basis functions φ(x). W_R is the constrained parameter, which is determined according to the prior knowledge, and W_U is the unconstrained parameter. E and I are the sets of indices of the equality and inequality constraints, ε_k is the positive slack variable of the soft constraints, and γ_k is the weight of the slack variable, defining the trade-off between the prior information and the training data. The details of optimizing the above formulation can be found in [4].
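The following Python sketch conveys the idea of Eq. (1) by fitting an RBF expansion under soft equality (interpolating-point) and ranking (monotonicity) priors via penalty terms. It is not the GCNN-LP solver of [4]: the penalty formulation, the optimizer, the RBF width, and the toy data are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_design(X, centers, width=8.0):
    """Gaussian RBF design matrix [1, phi_1(x), ..., phi_h(x)]."""
    d2 = (X[:, None] - centers[None, :]) ** 2
    return np.hstack([np.ones((len(X), 1)), np.exp(-d2 / (2 * width ** 2))])

def fit_soft_constrained_rbf(X, y, X_eq, y_eq, rank_pairs, centers, gamma=10.0):
    """Minimize Err(W) plus weighted slack of the priors, in the spirit of Eq. (1):
    equality (interpolating-point) priors and monotonicity priors expressed as
    ranking pairs (x_hi, x_lo) with f(x_hi) >= f(x_lo)."""
    Phi = rbf_design(X, centers)
    Phi_eq = rbf_design(X_eq, centers)
    Phi_hi = rbf_design(rank_pairs[:, 0], centers)
    Phi_lo = rbf_design(rank_pairs[:, 1], centers)

    def objective(w):
        err = np.mean((Phi @ w - y) ** 2)                    # training error Err(W)
        eps_eq = np.abs(Phi_eq @ w - y_eq)                   # slack of equality priors
        eps_rank = np.maximum(0.0, Phi_lo @ w - Phi_hi @ w)  # violated ranking priors
        return err + gamma * (eps_eq.sum() + eps_rank.sum())

    res = minimize(objective, np.zeros(Phi.shape[1]), method="Powell")
    return res.x, lambda x: rbf_design(np.atleast_1d(x), centers) @ res.x

# Toy example: "frequency" decreasing with patch dimension (monotonicity prior)
rng = np.random.default_rng(1)
X = rng.uniform(10, 50, 40)                      # patch dimension in mm
y = 300.0 / X + 0.1 * rng.standard_normal(40)    # noisy synthetic response
pairs = np.array([[10.0, 50.0], [20.0, 40.0]])   # f(10) >= f(50), f(20) >= f(40)
w, predict = fit_soft_constrained_rbf(X, y, np.array([30.0]), np.array([10.0]),
                                      pairs, centers=np.linspace(10, 50, 6))
print(predict(25.0))
```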
Measured data available in the open literature are used as interpolating priors. HFSS is used to generate 200 patterns for the monotonicity priors. It is known that the resonant frequency increases as the dimension of the patch, the height of the substrate, and the dielectric constant of the substrate decrease. By varying one of the inputs and keeping the other parameters constant, ranks are prepared, and these rankings are fed as monotonicity priors. In each case, during data generation, the substrate height h is varied between 0.2 and 12 mm, the dielectric constant εr between 1 and 12, and the dimension between 10 and 50 mm. The HFSS simulations have been automated using a Matlab-HFSS interface. All the antennas are coaxially fed. While simulating, each antenna has been simulated with the feed at 10 positions from the edge to determine the least S11 parameter. This approach is used to calculate the operating frequencies of elliptical, circular, and rectangular microstrip antennas.
The circular microstrip patch antenna tends to be slightly smaller than the rectangular patch [7, 8]. In certain applications, such as arrays, circular geometries offer some definite advantages over other configurations. The circular disk can be easily modified to yield a variety of impedance values, radiation patterns, and frequencies of operation. Based on the three-layer cavity model analysis [9, 10], a simple and general formula for the resonant frequency of the TMnm modes of a circular patch antenna with an air layer is as follows:
f_nm = ω_nm / (2π) = x_nm / (2π b √(μ₀ ε_eff))               (2)

where ε_eff is the effective dielectric constant of the layered cavity, given by

ε_eff = ε₀ ε_r (2h_d + h_a) / (2h_d + h_a ε_r)               (3)
where h_a and h_d are the heights of the air and dielectric substrate layers, respectively.
For calculating the resonant frequency, the above empirical expression is used, and the experimental data available in [8–10] are used as interpolating priors. The inputs of the network are a, h, and εr, and the resonant frequency is the output. The dimension of the network is 3 × 18 × 1.
Table 2 shows the resonant frequencies in GHz obtained for cases not included during training. The calculated results are close to the measured results.
Figure 3 shows the MSE versus the number of epochs. From zero to 100 epochs the error reduces rapidly, and it then reduces gently up to epoch 10,000, which is taken as the optimum, so the training is stopped at this point. The best MSE performance is at epoch 10,000, where the error is close to 0.0001, as can be observed from Fig. 3.
L = c / (2 f_r √ε_eff) − 2ΔL                                 (4)

where

ΔL = 0.412 h (ε_eff + 0.3)(W/h + 0.264) / [(ε_eff − 0.258)(W/h + 0.8)]      (5)
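A short numeric sketch of Eqs. (4)–(5) is given below. The effective permittivity expression used for eps_eff is the usual microstrip-line formula (an assumption, since it is not reproduced here), and the 2.4 GHz FR4 example values are illustrative.

```python
import math

def patch_length(fr_hz, eps_r, h_m, w_m):
    """Rectangular patch length from Eqs. (4)-(5)."""
    c = 3e8
    # Standard effective permittivity of a microstrip line (assumed here)
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h_m / w_m) ** -0.5
    dL = 0.412 * h_m * ((eps_eff + 0.3) * (w_m / h_m + 0.264)) / \
         ((eps_eff - 0.258) * (w_m / h_m + 0.8))              # Eq. (5)
    return c / (2 * fr_hz * math.sqrt(eps_eff)) - 2 * dL      # Eq. (4)

# Example: 2.4 GHz patch on FR4 (eps_r = 4.4, h = 1.6 mm, W = 38 mm)
print(patch_length(2.4e9, 4.4, 1.6e-3, 38e-3) * 1000, "mm")
```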
Fig. 5 Fabricated
rectangular patch antenna
3 Conclusions
Three different microstrip patch antenna configurations were analyzed with the help of the generalized constrained neural network with linear priors (GCNN-LP) as a knowledge-based neural network for resonant frequency calculation. A rectangular patch antenna was designed and tested with a VNA to support the work, and good agreement with the theoretical results was obtained. Linear priors can be defined as prior knowledge that bears a direct relation to the quantities of interest, which here are, for example, the length, width, radius, and eccentricity. In this paper, GCNN-LP is applied to calculate the resonant frequency of elliptical, circular, and rectangular microstrip antennas.
References
1. Jacobs JP (2015) Efficient resonant frequency modeling for dual-band microstrip antennas by
Gaussian process regression. IEEE Antennas Wirel Propag Lett 14:337–341
2. Wang F, Zhang QJ (2001) Incorporating functional knowledge into neural networks. IEEE
Trans Antennas Propag 37(4):262–269
3. Shavlik JW (2006) An overview of research at Wisconsin on knowledge-based neural networks.
IEEE Trans Antennas Propag 6(3):65–69
4. Modi AY, Mehta J, Pisharody N (2013) Synthesis of elliptical patch micro-strip antenna
using artificial neural network. In: 2013 international conference on microwave and photonics
(ICMAP), pp 1–3, Dec 2013
5. Kumprasert N (2000) Theoretical study of dual-resonant frequency and circular polarization
of elliptical microstrip antennas. In: IEEE antennas and propagation society international sym-
posium, vol 2, pp 1015–1020, July 2000
6. Gangwar SP, Gangwar RPS, Kanaujia BK, Paras (2008) Resonant frequency of circular
microstrip antenna using artificial neural networks. Indian J Radio Space Phys 37:204–208
7. Dahele JS, Lee KF (1993) Effect of substrate thickness on the performance of circular-disk
microstrip antenna. IEEE Trans Antennas Propag AP-31:358–360
8. Yano S, Ishimaru A (1981) A theoretical study of the input impedance of a circular microstrip
disk antenna. IEEE Trans Antennas Propag AP-29:77–83
9. Chew WC, Kong JA (1981) Analysis of a circular microstrip disk antenna with a thick dielectric
substrate. IEEE Trans Antennas Propag AP-29:68–76
10. Singh BK (2015) Design of rectangular microstrip patch antenna based on artificial neural net-
work algorithm. In: 2nd international conference on signal processing and integrated networks
(SPIN), pp 6–9, Nov 2015
11. Garg R, Bhartia P, Bhal I, Lttipiboon A (2000) Microstrip antenna design handbook, Chap 5.
Archtech House, Boston, London
12. Qu H-B, Wang Y (2005) Associating neural networks with partially known relationships for
nonlinear regressions. Springer, Berlin, Heidelberg, pp 737–746
Dual Band MIMO Antenna
with Reduced Mutual Coupling Using
DGS
Abstract In the current 4G technology, the primary requirement is a high data rate. This can be achieved easily by using a MIMO antenna system. In MIMO, when a number of antennas are placed in a small area, mutual coupling comes into the picture. The current research focuses on mitigating mutual coupling in MIMO (Multiple Input Multiple Output) antenna systems, and a MIMO antenna with enhanced isolation is proposed. In this work, a Defected Ground Structure (DGS) is used to reduce the mutual coupling. The antenna operates at 3.2 and 5.2 GHz. The mutual coupling at 3.2 GHz is −36.42 dB and at 5.2 GHz is −30.26 dB, making the antenna very suitable for both WiMAX and WLAN applications.
1 Introduction
For high data rates and efficient information transmission, wireless communication uses MIMO. The increase in channel capacity with an increasing number of antennas was first confirmed in [1]. The essential prerequisite of a MIMO antenna is that the receiving antennas must provide diversity reception at small spacing. When antennas are closely placed, their electromagnetic waves interfere with each other, resulting in signal loss. Mutual coupling indicates the amount of interference among the antennas, and the fundamental goal of any antenna design for MIMO is to decrease this mutual coupling [2]. The impact of mutual coupling on MIMO wireless channel capacity is examined in [3], and the essential source of this mutual coupling is considered in [4, 5]. DGS [6] and decoupling techniques [7] are used to minimize mutual coupling; the reduction is accomplished at the cost of structural complexity. Mutual coupling alters the antenna matching characteristics and radiation pattern, and this is severe when the antennas are near each other [8]. Surface currents can be an even more serious issue, particularly when antennas are closely placed. Microstrip patch antennas are well established in many types of applications, such as mobile, airborne, and satellite applications [9].
A DGS is a defect in the ground plane which disturbs the shield current distribution; as a result, the capacitance and inductance of the transmission line change [10]. Section 2 describes the antenna geometry, Sect. 3 deals with the results, and finally the conclusion is given in Sect. 4.
2 Antenna Design
Two similar microstrip-fed antennas with DGS are as shown in Fig. 1. The antennas
operate at 3.2 and 5.2 GHz. The edge-to-edge separation is 7 mm (0.07λ0). The
substrate is FR4 with a thickness of 1.6 mm and relative permittivity of 4.3. The
dimensions of the single antenna are taken as 25 × 30 mm2 . The substrate dimensions
are taken as 50 × 30 mm2 . For the patch, copper material is chosen.
3 Results and Discussion
This section deals with the simulation results. The simulated scattering parameters S11 and S21 are shown in Fig. 2. S11 shows that the antenna resonates at 3.2 and 5.2 GHz. The distance between the two elements and the surface currents are the key factors that decide the amount of mutual coupling [11]. The technique used for mutual coupling reduction in this paper is the Defected Ground Structure: the defect in the ground plane prevents the propagation of surface currents, which reduces the mutual coupling. From Fig. 2, it is also evident that the mutual coupling S21 is reduced to −36.42 dB at 3.2 GHz and −30.26 dB at 5.2 GHz.
The E-plane and H-plane radiation patterns of the antenna at 3.2 and 5.2 GHz are shown in Fig. 3.
Fig. 1 Geometry of the two microstrip antennas with DGS: a top view, b bottom view. Ws = 25 mm, Ls = 30 mm, Wp = 18 mm, Lp = 21 mm, S1 = 12.02 mm, R = 10 mm, R1 = 17.30 mm, S2 = 6 mm, WF = 3 mm, Lg = 30 mm, Wg = 21 mm
[Fig. 2 Simulated S-parameters (S11 and S21, in dB) of the proposed antenna versus frequency (GHz)]
[Fig. 3 Simulated E-plane and H-plane radiation patterns (dB versus degrees) of the antenna at 3.2 and 5.2 GHz]
[Fig. 4 Envelope correlation coefficient (ECC) of the proposed two-element MIMO antenna versus frequency (GHz)]
ECC and diversity gain are inversely related: the diversity gain is better when the correlation coefficient is low. The diversity gain is 9.99 dB, as shown in Fig. 5.
Fig. 5 Diversity gain for the proposed two elements MIMO antenna
4 Conclusion
In this paper, a novel two-element MIMO antenna is developed. The antenna resonates at 3.2 and 5.2 GHz with reduced mutual coupling of −36.42 and −30.26 dB, respectively. The proposed MIMO antenna has an ECC of 0.0002 at 3.2 GHz and 0.0004 at 5.2 GHz, and a diversity gain of 9.99 dB. It is suitable for WiMAX and WLAN applications.
References
7. Park B-Y, Choi J-H, Park S-O (2009) Design and analysis of LTE MIMO handset antenna with
enhanced isolation using decoupling technique. In: International symposium on antennas and
propagation (ISAP 2009), Bangkok, Thailand, 20–23 October 2009, pp 827–830
8. Farsi S, Aliakbarian H, Schreurs D, Nauwelaers B, Vandenbosch GAE (2012) Mutual coupling
reduction between planar antennas by using a simple microstrip U-section. IEEE Antennas
Wirel Propag Lett 11
9. Ahmed MI, Sebak A, Abdallah EA (2012) Mutual coupling reduction using defected ground
structure (DGS) for array applications. IEEE. ISBN 978-1-4673-0292-0/12
10. Weng LH, Guo YC, Shi XW, Chen XQ (2008) An overview on defected ground structure. Prog
Electromagn Res B 7:173–189
11. Talha MY, Babu KJ, Aldhaheri RW. Design of a compact MIMO antenna system with reduced
mutual coupling. Int J Microw Wirel Technol. https://doi.org/10.1017/s1759078714001287
12. Blanch S, Romeu J, Corbella I (2003) Exact representation of antenna system diversity performance from input parameter description. Electron Lett 39(9):705–707
13. Zhang S, Pedersen GF. Mutual coupling reduction for UWB MIMO antennas with a wideband
neutralization line. IEEE Antennas Wirel Propag Lett. https://doi.org/10.1109/lawp.2015.243
5992
14. Kharche S, Shrikanth Reddy G, Gupta RK, Mukherjee J (2016) Mutual coupling reduction
using shorting posts in UWB MIMO antennas. In: Proceedings of the Asia-Pacific microwave
conference 2016
Nonlinear Signal Processing Applications
of Variants of Particle Filter: A Survey
Abstract Many engineering applications require state estimation of real-time systems. Real-time dynamic systems are normally modeled by discrete-time state space equations, and the behavior of the state space equations of many dynamic systems is nonlinear and non-Gaussian. The particle filter is one of the methods used for the analysis of these dynamic systems. In this review paper, many modified variants of particle filter algorithms and their applications to different dynamic systems are discussed. State vector estimation using modified variants of the particle filter is discussed and compared with other standard algorithms.
1 Introduction
In recent years, the particle filter (PF) has been very widely used in many engineering applications. PF can deal with nonlinear and non-Gaussian problems very effectively, and hence the system analysis can be made more accurate [1]. PF is an implementation of the recursive Bayesian filter using the Monte Carlo method. The Monte Carlo method was not common in earlier years due to its high computational complexity; it was initially applied in the polymer-growth area and then expanded to science and various engineering fields. Developments in computing led to active research on PF. The state space model and the observation model are the main parts of PF-based signal processing. The Extended Kalman Filter (EKF) was the first technique introduced to deal with nonlinear and non-Gaussian problems, in which Taylor-series-based linearization is done [2], but it is not so effective under strong nonlinearities, which led to the development of the particle filter. The particle filtering technique starts with particle generation, followed by the weight update, which is also called sequential importance sampling; a check on the effective number of particles is then conducted, and if the count is less than a particular threshold, resampling is done to avoid the degeneracy problem. In resampling, the particles with lower weights are discarded and those with higher weights are replicated. The unscented particle filter (UPF) is one of the improved versions of the particle filter, where the unscented transform is used for the generation of the initial set of particles, and it has better performance than PF [3]. Many other advancements in the particle filter and its various applications are discussed in the next section, followed by conclusions.
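The generic bootstrap (SIR) particle filter outlined above can be sketched as follows; the Gaussian likelihood, the resampling threshold of 0.5·Np, and the scalar nonlinear growth model used as the example are assumptions commonly made in tutorial treatments, not taken from this survey.

```python
import numpy as np

def bootstrap_particle_filter(observations, f, h, x0_sampler, Np=500,
                              process_std=0.1, meas_std=0.2, resample_frac=0.5):
    """Generic bootstrap (SIR) particle filter: sample, weight by the measurement
    likelihood, and resample when the effective particle count drops."""
    particles = x0_sampler(Np)
    weights = np.full(Np, 1.0 / Np)
    estimates = []
    for k, z in enumerate(observations):
        # 1. Propagate particles through the (nonlinear) state model
        particles = f(particles, k) + process_std * np.random.randn(Np)
        # 2. Weight update (sequential importance sampling) with a Gaussian likelihood
        weights *= np.exp(-0.5 * ((z - h(particles)) / meas_std) ** 2)
        weights /= weights.sum() + 1e-300
        # 3. Resample if the effective number of particles is below the threshold
        if 1.0 / np.sum(weights ** 2) < resample_frac * Np:
            idx = np.random.choice(Np, Np, p=weights)
            particles, weights = particles[idx], np.full(Np, 1.0 / Np)
        estimates.append(np.sum(weights * particles))   # posterior-mean estimate
    return np.array(estimates)

# Example: the standard scalar nonlinear growth model often used to test PFs
f = lambda x, k: 0.5 * x + 25 * x / (1 + x ** 2) + 8 * np.cos(1.2 * k)
h = lambda x: x ** 2 / 20.0
true_x, zs = [0.1], []
for k in range(1, 50):
    true_x.append(f(np.array([true_x[-1]]), k)[0] + 0.1 * np.random.randn())
    zs.append(h(true_x[-1]) + 0.2 * np.random.randn())
est = bootstrap_particle_filter(zs, f, h, lambda n: 0.1 + np.random.randn(n))
print(est[:5])
```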
Extended Particle Filter (EPF) based dynamic state estimation for synchronous machines is proposed in [4]. The rotor angle and speed are the two variables that determine the dynamic status of a machine. Phasor measurement unit (PMU) data are used here for the estimation of the dynamic states. The basic particle filter is made more robust by the proposed extended particle filter through particle dispersion inflation and iterative sampling. A performance comparison is made by applying EKF, PF, and UKF to dynamic state estimation, and the results show that the EPF has higher accuracy against the modeled noise.
Airborne vehicle tracking using advanced particle filtering in urban areas is pro-
posed in [5]. Traffic analysis can be made more efficient if vehicle trajectories from
airborne image sequences are available. Particle cloud spreading in each vehicle
search space is controlled by presenting an adaptive motion model. Tracking stabil-
ity is improved by a spatiotemporal particle guiding approach. A template update
strategy is used for handling the vehicles appearance changes. Tracking using stan-
dard PF and advanced PF is done and compared which shows the better performance
of advanced particle filter. The number of correctly tracked vehicles using PF and
advanced PF for different frame numbers is shown in Table 1, from which it can be
inferred that advanced PF gives better tracking.
Particle filter based estimation of passengers in a metro system is proposed in [6]. This method is proposed to improve the performance of the current metro system and thereby satisfy the user. The number of passengers aboard the train and waiting in the station is estimated using PF. The dwell time and the number of passengers entering the station are the measurements taken. The results show that the PF-based estimation gives higher accuracy in estimating these variables and thereby improves the system performance to a higher level. Dynamic state estimation of all generators in a power system using the particle filter is proposed in [7]. In this approach, each estimation module is considered to be independent and only the local measurements are used. The numerical part is made simple by using the PF algorithm in this application. Estimation is made smooth and highly accurate in the presence of noise when using the PF algorithm. The time plots in that paper show the high accuracy of PF in estimating the states of all generators in a power system.
In [8], a particle filter approach is proposed for estimating the different growth stages and key dates in rice crops. The results in that paper show that PF is a more reliable method for inferring such stages than EKF. Current stage retrieval and the estimation of crop event dates, such as sowing, are the main applications tested. An error of less than 1 day per parcel was found in estimating the sowing date; the test was also conducted for 786 parcels, and an error lower than 5 days was obtained in the estimation for fifty percent of the parcels and less than 10 days for 85% of the parcels. Fault detection using an Intelligent Particle Filter (IPF) for a nonlinear system is proposed in [9]. The particle impoverishment that occurs in PF can lead to inaccurate state estimation; this problem can be avoided by using the proposed IPF algorithm. A genetic-operator-based strategy in IPF improves the diversity of the particles. Experiments done in the paper show that IPF has higher estimation accuracy compared with the generic PF, and higher accuracy is shown by IPF in a real-time three-tank fault detection.
A Beam Former Particle Filter (BPF) algorithm is proposed in [10] for the EEG
sources localization. EEG brain source spatial locations and waveforms are esti-
mated using multicore BPF. The proposed algorithm takes the advantage of both
deterministic and Bayesian inverse problem algorithms and hence the accuracy of
the estimate is improved. Experiments are done taking the real EEG signals and on
applying BPF a correct recovery of brain activity is seen. Improved Particle Filter
(IPF) based algorithm for multiple vehicles tracking in complicated road environ-
ments is proposed in [11]. A new process dynamic distribution is included in IPF for
better tracking. Area normalization of MCRP (minimum circumscribed rectangle of
particles) distribution is done here to detect the vehicle disappearance and handling
mechanism. Occlusion detection is also done in this method. An accurate multiple
vehicle tracking is obtained by applying the proposed algorithm in different datasets.
A comparison of multiple object tracking accuracy and precision (MOTA and MOTP) between the PF-mean shift adaptive algorithm (Method 1), the online HOG (Histogram of Oriented Gradients) method (Method 2), the trained PF (Method 3), and IPF is given in Table 2, from which it can be inferred that IPF has better performance than the other methods.
Salient object detection in video using the particle filter is proposed in [12]. Spatiotemporal saliency and color maps are used as cues. Standard datasets are taken and experiments are done on them; the results show a higher performance of the proposed method in video segmentation and feature extraction. To deal with the noise
94 P. Sudheesh and M. Jayakumar
measurement model, the curvature of the surface is taken into account for a better
performance. Likelihood function linearization is done to decrease the communi-
cation overhead. Simulation results show the better performance of the proposed
algorithm on comparing with a centralized filter. Kalman particle filter (KPF) based
estimation with symmetric alpha-stable noise is proposed in [18]. Here at first, PF
is applied for the parameter coarse estimation and then KF is applied for achieving
a better estimation. Comparison of KPF is done between PF, SPF, EKF, and UKF
for periodic signal parameter estimation. The proposed algorithm has also applied
to high-frequency source localization. RMSE of location estimation for different
GSNR on using KPF and SPF algorithms is shown in Table 5. Tables 6 and 7 show
the NRMSE of amplitude and phase at different GSNR respectively, from which it
can be concluded that KPF has got a better performance in all GSNRs.
Particle filter based estimation of a hidden autoregressive moving average (ARMA)
process is proposed in [19]. Estimation is done for a known model order, first with known
parameters and then with unknown parameters. The ARMA process has Gaussian noise
with unknown variance. Not all of the unknown parameters are estimated directly; some are
treated with Rao–Blackwellization. The performance of the particle filter in estimating the
ARMA process is demonstrated through extensive computer simulations. Adaptive
particle filter based traffic prediction is proposed in [20]. The number of particles
is adjusted adaptively according to the particle states. Simulation results show the better
performance of the proposed algorithm, which leads to higher accuracy in traffic prediction.
The application of the particle filter to MIMO channel estimation is proposed in [21].
Simulation results show that the PF algorithm gives a better estimate of the channel.
MSE versus SNR is plotted for PF- and EKF-based estimation; as can be inferred from Table 8,
the particle filter has lower errors at low SNR than the EKF.
Object tracking based on the unscented particle filter (UPF) is proposed in [22]. The
experiments are done in a normal office environment and the results of applying the UPF are
compared with the condensation technique. The condensation-based results are
affected by background objects, whereas the UPF results are more accurate and
precise. A combination of the unscented transform and particle filtering for scheduling
sensors multiple steps ahead is proposed in [23]. Improving system performance
and reducing the resource cost are the main advantages of sensor scheduling. The
expected cost is predicted multiple steps ahead; to achieve this, several particles are
introduced for each sensor and the sequence that minimizes the predicted cost is selected.
Arbitrary cost functions can be incorporated in this algorithm, and it can also be used
for other applications such as localization in sensor networks [24, 25].
3 Conclusion
References
1. Djurić PM, Kotecha JH, Zhang J, Huang Y, Ghirmai T, Bugallo MF, Miguez J (2003) Particle
filtering. IEEE Signal Process Mag 5:19–38
2. Ignatius G, Murali Krishna Varma U, Krishna NS, Sachin PV, Sudheesh P (2012) Extended
Kalman filter based estimation for fast fading MIMO channels. In: IEEE international confer-
ence on devices, circuits and systems (ICDCS), pp 466–469
3. Van Der Merwe R, Doucet A, De Freitas N, Wan E (2000) The unscented particle filter. In:
NIPS, pp 584–590
4. Zhou N, Meng D, Lu S (2013) Estimation of the dynamic states of synchronous machines
using an extended particle filter. IEEE Trans Power Syst 28(4):4152–4161
5. Szottka I, Butenuth M (2014) Advanced particle filtering for airborne vehicle tracking in urban
areas. IEEE Geosci Remote Sens Lett 11(3):686–690
6. Reyes F, Cipriano A (2014) On-line passenger estimation in a metro system using particle filter.
IET Intell Transp Syst 8(1):1–8
7. Emami K, Fernando T, Lu HH-C, Trinh H, Wong KP (2015) Particle filter approach to dynamic
state estimation of generators in power systems. IEEE Trans Power Syst 30(5):2665–2675
8. De Bernardis CG, Vicente-Guijalba F, Martinez-Marin T, Lopez-Sanchez JM (2015) Estimation
of key dates and stages in rice crops using dual-polarization SAR time series and a particle
filtering approach. IEEE J Sel Top Appl Earth Obs Remote Sens 8(3):1008–1018
9. Yin S, Zhu X (2015) Intelligent particle filter and its application to fault detection of nonlinear
system. IEEE Trans Ind Electron 62(6):3852–3861
10. Georgieva P, Bouaynaya N, Silva F, Mihaylova L, Jain LC (2016) A beamformer-particle
filter framework for localization of correlated EEG sources. IEEE J Biomed Health Inform
20(3):880–892
11. Liu P, Li W, Wang Y, Ni H (2014) On-road multi-vehicle tracking algorithm based on an
improved particle filter. IET Intell Transp Syst 9(4):429–441
12. Muthuswamy K, Rajan D (2015) Particle filter framework for salient object detection in videos.
IET Comput Vis 9(3):428–438
13. Han X, Lin H, Li Y, Ma H, Zhao X (2015) Adaptive fission particle filter for seismic random
noise attenuation. IEEE Geosci Remote Sens Lett 12(9):1918–1922
14. Khalili A, Soliman AA, Asaduzzaman M (2015) Quantum particle filter: a multiple mode
method for low delay abrupt pedestrian motion tracking. Electron Lett 51(16):1251–1253
15. Zhang X-P, Khwaja AS, Luo J-A, Housfater AS, Anpalagan A (2015) Multiple imputations par-
ticle filters: convergence and performance analyses for nonlinear state estimation with missing
data. IEEE J Sel Top Signal Process 9(8):1536–1547
16. Lin SD, Lin J-J, Chuang C-Y (2015) Particle filter with occlusion handling for visual tracking.
IET Image Proc 9(11):959–968
17. Yu JY, Coates MJ, Rabbat MG, Blouin S (2016) A distributed particle filter for bearings-only
tracking on spherical surfaces. IEEE Signal Process Lett 23(3):326–330
18. Xia N, Wei W, Li J, Zhang X (2016) Kalman particle filtering algorithm for symmetric alpha-
stable distribution signals with application to high frequency time difference of arrival geolo-
cation. IET Signal Proc 10(6):619–625
19. Urteaga I, Djurić PM (2017) Sequential estimation of hidden ARMA processes by particle
filtering—Part I. IEEE Trans Signal Process 65(2):482–493
20. Wen M, Yu W, Peng J, Zhang X (2014) A traffic flow prediction algorithm based on adaptive
particle filter. In: The IEEE 26th Chinese control and decision conference, pp 4736–4740
21. Muthukrishnan MG, Sudheesh P, Jayakumar M (2016) Channel estimation for high mobility
MIMO system using particle filter. In: International conference on recent trends in information
technology (ICRTIT)
22. Rui Y, Chen Y (2001) Better proposal distributions: object tracking using unscented particle
filter. In: IEEE computer society conference on computer vision and pattern recognition, pp
786–793
23. Chhetri AS, Morrell D, Papandreou-Suppappola A (2004) The use of particle filtering with the
unscented transform to schedule sensors multiple steps ahead. In: IEEE international conference
on acoustics, speech, and signal processing, pp 301–304
24. Megha SK, Ramanathan R (2017) Impact of anchor position errors on WSN localization using
mobile anchor positioning algorithm. In: International conference on wireless communications,
signal processing and networking
25. Sreeraj SJ, Ramanathan R (2017) Improved geometric filter algorithm for device free local-
ization. In: International conference on wireless communications, signal processing and net-
working
Evaluating the Effectiveness of Visual
Techniques Methodologies for Cerebral
Palsy Children and Analyzing the Global
Developmental Delay
P. Illavarason (B)
Faculty of Information and Communication Engineering, CEG, Anna University, Chennai, India
e-mail: illavarason.p@gmail.com
J. Arokia Renjit
Department of CSE, Jeppiaar Engineering College, Chennai, India
e-mail: dr.arokiarenjith@gmail.com
P. Mohan Kumar
Department of IT, Jeppiaar Engineering College, Chennai, India
e-mail: mohankumarmohan@gmail.com
1 Introduction
Cerebral Palsy (CP) is a movement disorder that appears in early childhood. "Cerebral"
refers to the cerebrum, the affected region of the brain, and "palsy" refers to the impaired
movement of parts of the body. The growth of a CP child is considered developmentally
delayed when it is delayed in one or more areas, including the gross motor functions.
Global Developmental Delay (GDD) represents one of the sub-domains of developmental
delay, in which a child lags by more than the expected standard deviation from the mean
in several areas of development. Children with CP and GDD have a higher incidence of
ocular abnormalities than normal healthy children [1]. Strabismus is the commonest
oculomotor abnormality associated with CP children [2]. It is an important and often
treatable cause of visual impairment worldwide [3].
This study adds further evidence and emphasis to the sparse literature on ophthalmic
abnormalities in children with CP and DD [4]. The evaluation of children with CP and
GDD is a part of their routine examination.
The proposed system evaluates the effectiveness of visual-process techniques related to a
functional-question assessment of CP children, and recommends a continuous routine of
day-to-day visual training activities, which improves visual functioning in these children.
As a result, visual therapy provides cognitive training to each individual CP child in a
sequential manner as part of the rehabilitation process.
2 Related Work
In [5], 200 patients with CP in the age group of 8 months to 21 years (110 males and
90 females) underwent ophthalmological examination. Of these children, 40/200 (20%)
had hypertropia. Strabismus was present in 78/200 (39%) of the children; 44 had
exotropia and 34 had esotropia. Normal pupillary reactions were seen in 188/200 (94%)
of the patients. Other ocular abnormalities included disk pallor with sluggish pupillary
reactions in 11 patients, congenital glaucoma in 1, developmental cataract in 5/200, and
horizontal jerky nystagmus (involuntary eye movement) in 11 (5.5%) patients [5].
In [6], 664 children with learning disabilities (mean IQ 45.4) were assessed. Oculomotor
abnormalities were seen in 238/664 (45.3%) children: strabismus (squint) in 82 (15.7%),
involuntary eye movements in 35 (5.7%), optic atrophy in 33 (6.4%), and other eye
diseases in 12 (2.4%) of the children with CP. The vision of 60 of the children improved
with refraction [6]. In total, 135 (50.7%) CP children with a diagnosis of perinatal
disease had oculomotor problems [6].
(Fig. 1 labels: oculomotor impairment, nystagmus, strabismus)
In [7], 46 CP children between 2 and 12 years of age were examined for ocular
abnormalities, which were found in 35 of the participants. Strabismus (squint) was seen
in 31.8% of the children, myopia (short sight) in 40.4%, hypertropia in 20%, and
astigmatism in 10%. Optic atrophy was seen in 20% of the participants, nystagmus in
11.4%, chorioretinal degeneration in 14.3%, and optic hypoplasia in 5.7%. Cortical
visual impairment was seen in 51.4% of the children [7].
In another country, 98.8% (1119) of the children with developmental delay and CP were
examined; of these, 81% (923) underwent ophthalmological examination. The visual
impairment was of prenatal origin in 54 children, perinatal origin in 29 children, and
postnatal origin in 7 children [2].
The majority of CP patients are affected by eye problems, so the practical effectiveness of
vision-oriented task performance is also examined here. In this way, vision improves
considerably, new cells are generated in the brain, and damaged tissue is reconstructed
through the vision-therapy rehabilitation process. Figure 1 illustrates the most common
eye-related problems faced by CP children.
One of the major vision problems of CP patients is squint, known as strabismus
(misalignment of the eyes). Figure 2 represents the strabismic eye conditions of CP patients.
Fig. 2 Illustration of the oculomotor abnormality of strabismus (squint eye) in CP children, compared with a normal eye
From these, the pupillary reflections can be detected. Since the majority of CP children are
affected by vision dysfunction, this proposal deals with the visual assessment of children
with CP. Vision rehabilitation helps the person to manage communication, education,
day-to-day activities, and leisure activities. The behavioral scale is designed to provide
information about the functional skills of a person with disabilities for the purpose of
individualized programme planning. Vision therapy can improve the reaction time and
also the cognitive abilities of children with cerebral palsy. Table 1 presents the demographic
information and practical evaluation effectiveness of the CP children (n = 30).
The most important functional questions are based on the basic visual assessment and the
task-oriented assessment; as a result, they improve the day-to-day routine visual performance
activities of CP children. A 45-item questionnaire based on the functional assessment of
vision was implemented in this study. Assessing the functional vision of a CP child is not a
straightforward procedure: it involves careful observation of the visual behavior of the child
under different conditions. The functional assessment is a continuous, ongoing process, and
each time the resource person has to prepare a new set of items.
Table 3 A significant difference in visual assessment between the Cerebral Visual Impairment (CVI)
and Optic Atrophy (OA) groups in the task-oriented skills

                                   CVI (n = 20)                  OA (n = 10)
                                   μ              σ              μ              σ
VCS                                2.179          1.182          1.949          1.654
                                   (1.603–2.519)                 (1.249–2.749)
Task-oriented visual skills        1.749          0.949          1.794          1.124
                                   (1.374–2.149)                 (1.334–2.254)
Vision-oriented functioning        1.848          0.289          1.894          0.374
                                   (1.336–2.554)                 (1.274–2.589)
tion Assessment daily, dividing it into two subclusters: one a cluster of CP vision
problems and the other a cluster of optic atrophy disorders.
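Table 3 reports only the group means and standard deviations for the CVI (n = 20) and OA (n = 10) groups. The short sketch below shows how the significance of such a between-group difference can be checked from summary statistics alone with a Welch t-test; the example numbers are taken from the VCS row, and this is only an illustration, not necessarily the exact statistical procedure used by the authors.

```python
from scipy import stats

# Summary statistics from the VCS row of Table 3: (mean, std, n)
cvi_mean, cvi_std, cvi_n = 2.179, 1.182, 20
oa_mean, oa_std, oa_n = 1.949, 1.654, 10

# Welch t-test computed directly from the summary statistics
t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=cvi_mean, std1=cvi_std, nobs1=cvi_n,
    mean2=oa_mean, std2=oa_std, nobs2=oa_n,
    equal_var=False)

print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```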
Surveillance is the identification of the risk factors for DD with CP. This surveillance
includes a medical evaluation of the CP child's deficiency disorders, covering major and
minor congenital abnormalities. An eye examination is performed for CP children with poor
eye tracking, strabismus, or nystagmus; an ear examination is performed for CP children with
recurrent and chronic ear problems; and a neurologic examination evaluates muscle tone,
symmetry, reflexes, and strength.
Based on the outcome of screening, the physician recommends formal screening at the 9th,
18th, and 24th months during routine surveillance. CP children who fail the screening test
require additional assessment and evaluation.
5 Conclusions
The proposed system was developed and implemented with a validated functional
questionnaire for CP children, and it also determines and assesses the differences in
oculomotor abnormalities among them. The vision questionnaire is a process of estimating
the visual efficiency of CP children with a vision problem. Visual efficiency differs from
individual to individual, so it is necessary to test every individual with a visual problem.
The vision-screening procedures at the different stages are distance and near visual acuity,
visual field, contrast, and color vision. The common vision problems of CP children are
strabismus, nystagmus, astigmatism, and diplopia. The functional questionnaire assessment,
together with the continuous routine of training activities in the surveillance and screening
process, promotes visual functioning by training each individual in a sequential manner.
From this, we conclude that cognitive functional vision may improve with the surveillance
and screening process, and that functional vision as measured by the questionnaire may be
improved by training therapy together with cognitive abilities.
Conflict of Interest The authors declare that there is no conflict of interests regarding the publi-
cation of this paper.
References
1. Paediatric Clinics of North America (2008) Developmental disabilities part one, vol 55, pp 5–12
2. Nielsen LS, Skov L, Jensen H (2007) Visual dysfunctions and ocular disorders in children with
developmental delay. Prevalence, diagnoses and aetiology of visual impairment. Acta Ophthal-
mol Scand 85:149–156
3. Buckley E, Seaber JH (1981) Dyskinetic strabismus as a sign of cerebral palsy. Am J Ophthal
91:652–657
4. Govind A, Lamba PA (1988) Visual disorders in cerebral palsy. Indian J Ophthalmol 36:88–91
5. Katoch S, Devi A, Kulkarni P (2007) Ocular defects in cerebral palsy. Indian J Ophthalmol
55:154–156
6. Gogate P, Soneji FR, Kharat J, Dulera H, Deshpande M, Gilbert C (2011) Ocular disorders in
children with learning disabilities in special education schools of Pune, India. Indian J Ophthal-
mol 59:223–228
7. Elmenshawy AA, Ismael A, Elbehairy H, Kalifa NM, Fathy MA, Ahmed AM (2010) Visual
impairment in children with cerebral palsy. Int J Acad Res 2(5):96–103
Design of Dual-Band MIMO Antenna
for LTE 2500, Wimax, and C-Band
Applications to Reduce the Mutual
Coupling
Abstract This article demonstrates a dual-band MIMO antenna that uses a neutralization
line, a defected ground structure, and parasitic elements to reduce the mutual coupling.
These structures are formed by inserting the different techniques between the two
microstrip patch antennas. When such structures are inserted in the middle of the two
antennas with an edge-to-edge spacing of 7 mm (0.06λ0), a mutual coupling better than
−20 dB is obtained over the entire bandwidth from 2.4 to 2.8 GHz and 5.8 to 6.6 GHz,
with S11 and S12 below −10 dB. Moreover, the mutual coupling reaches beyond −35 dB
over the entire 2.4–2.8 GHz and 5.8–6.6 GHz bands. These techniques provide better
results than previous works, which report mutual coupling of greater than −30 dB over
the entire wider bandwidth, and they also improve the impedance bandwidth and the
radiation characteristics of the dual-band antenna.
1 Introduction
The simulated parameters, namely mutual coupling (isolation), VSWR, and directivity,
are shown in Figs. 2, 3, 4, and 5. Finally, a comparison with other existing methods is
given in Table 1. Figure 3 shows the improvement in mutual coupling obtained by applying
the different methods inserted between the microstrip MIMO antennas. After inserting
these structures, a considerable improvement in the bandwidth from 2.3 to 2.8 GHz is
obtained in simulation with S11 ≤ −10 dB and S12 ≤ −20 dB. From 2.3 to 2.8 GHz, the
simulated S12 is reduced below −25 dB, which covers the WiMAX and LTE 2500 bands,
and from 5.8 to 6.4 GHz it is reduced below −30 dB, which covers C-band applications.
The ECC comparison graph is shown in Fig. 6. Figure 7 shows the co- and cross-polarization
at 2.6 GHz and Fig. 8 shows the co- and cross-polarization at 6.1 GHz.
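The envelope correlation coefficient (ECC) compared in Fig. 6 is commonly computed from the simulated S-parameters of a two-port MIMO antenna under a lossless-antenna approximation. The sketch below applies that standard S-parameter formula; the numerical S-parameter values are placeholders and are not the simulated data of this design.

```python
import numpy as np

def ecc_from_s_params(s11, s21, s12, s22):
    """Envelope correlation coefficient of a two-port antenna from
    complex S-parameters (standard lossless-antenna approximation)."""
    numerator = np.abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    denominator = ((1 - np.abs(s11) ** 2 - np.abs(s21) ** 2)
                   * (1 - np.abs(s12) ** 2 - np.abs(s22) ** 2))
    return numerator / denominator

# Placeholder complex S-parameters at a single frequency point
s11 = 0.10 * np.exp(1j * np.deg2rad(30))
s21 = 0.02 * np.exp(1j * np.deg2rad(-60))   # roughly -34 dB coupling
s12 = s21                                    # reciprocal two-port network
s22 = 0.12 * np.exp(1j * np.deg2rad(45))

print(f"ECC = {ecc_from_s_params(s11, s21, s12, s22):.4f}")
```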
In the simulation, one radiator (antenna) is excited and the other radiator is terminated with
a 50 Ω load. The radiation pattern is observed and the total efficiency is evaluated.
Here, ηtotal is the total efficiency of the dual-band radiator and ηradiation is its radiation
efficiency. With an isolation of −20 dB over the impedance bandwidth from 2.3 to 2.8 GHz
and 5.8 to 6.2 GHz, the efficiency achieved is 85% for the MIMO antenna, 72% with the
neutralization method, 82% with the defected ground structure, and 78% with the parasitic
elements.
Fig. 7 Co and cross polarization at 2.6 GHz a MIMO b neutralization line c DGS d parasitic
elements
Fig. 8 Co and cross polarization at 6.1 GHz a MIMO b neutralization line c DGS d parasitic
elements
It should be noted that better mutual coupling (isolation) implies that the loss is kept low,
since the isolation parameter of a multiple-antenna system represents the electromagnetic
interaction between the antennas.
4 Conclusions
In this paper, a dual-band MIMO antenna using a neutralization line, defected ground
structures (DGS), and parasitic elements has been proposed. The operating mechanism of
the dual-band antenna and the simulation results have been presented. By introducing
methods such as the neutralization line, DGS, and parasitic elements, the mutual coupling
performance has been improved. Compared with previous works, the dual-band MIMO
antenna has a mutual coupling of −37 dB at 2.6 GHz and −48 dB at 6.1 GHz, with S11 and
S12 below −10 dB, for an edge-to-edge separation of 0.06λ0 at the two resonant bands.
References
1. Xue C-D, Zhang XY (2017) MIMO antenna using hybrid electric and magnetic coupling for
isolation enhancement. IEEE Trans Antennas Propag 65:5162–5169
2. Lee J-Y, Kim S-H, Jang J-H (2015) Reduction of mutual coupling in planar multiple antenna
by using 1-D EBG and SRR structures. IEEE Trans Antennas Propag 63:4194–4198
3. Chang S-C, Wang Y-S, Chung S-J (2008) A decoupling technique for increasing the port
isolation between two strongly coupled antennas. IEEE Trans Antennas Propag 56:3650–3658
4. Wang Y, Du Z (2014) A wideband printed dual-antenna with three neutralization lines for
mobile terminals. IEEE Trans Antennas Propag 62:1495–1500
5. Ketzaki DA, Yioultsis TV (2013) Metamaterial-based design of planar compact MIMO
monopoles. IEEE Trans Antennas Propag 61:452–455
6. Shoaib S, Shoaib I, Shoaib N, Chen X, Parini CG (2014) Design and performance study of a
dual-element multiband printed monopole antenna array for MIMO terminals. IEEE Antennas
Wirel Propag Lett 13:329–332
7. Wu C-H, Zhou G-T, Wu Y-L, Ma T-G (2013) Stub-loaded reactive decoupling network for
two-element array using even–odd analysis. IEEE Antennas Wirel Propag Lett 12:452–455
8. Wu C-H, Chiu C-L, Ma T-G (2016) Very compact fully lumped decoupling network for a
coupled two-element array. IEEE Antennas Wirel Propag Lett 15:158–161
9. Zhao L, Yeung LK, Wu K-L (2014) A coupled resonator decoupling network for two-element
compact antenna arrays in mobile terminals. IEEE Trans Antennas Propag 62:2767–2776
10. See CH, Abd-Alhameed RA, Abidin ZZ, McEwan NJ, Excell PS (2012) Wideband printed
MIMO/diversity monopole antenna for WiFi/WiMAX applications. IEEE Trans Antennas
Propag 60:2028–2035
11. Tang X, Qing X, Chen ZN (2015) Simplification and implementation of decoupling and match-
ing network with port pattern-shaping capability for two closely spaced antennas. IEEE Trans
Antennas Propag 63:3695–3699
12. Su S-W, Lee C-T, Chang F-S (2012) Printed MIMO antenna system using neutralization-line
technique for wireless USB-dongle applications. IEEE Trans Antennas Propag 60:456–463
A Hybrid Alzheimer’s Stage Classifier
by Kernel SVM, MLP Using Texture
and Statistical Features of Brain MRI
1 Introduction
A medical image carries a great deal of information for analyzing, diagnosing, and
classifying disease, and the radiologist plays a noteworthy part in identifying the disease
from it. However, owing to heavy workload and visual fatigue, the decision taken by the
radiologist may be wrong, which leads to major problems. Owing to urbanization and food
habits, people are affected by multiple brain disorders [1]. Incidence rates of common
disorders such as epilepsy, stroke, Parkinson's disease, and tremors, determined through
population studies, show extensive variation across different locations of the country.
S. Basheera (B)
Acharya Nagarjuna College of Engineering and Technology, Acharya Nagarjuna University,
Guntur, Andhra Pradesh, India
e-mail: basheer_405@rediffmail.com
M. Satya Sai Ram
Department of Electronics and Communication Engineering,
Chalapathi Institute of Engineering and Technology, Guntur, Andhra Pradesh, India
e-mail: msatyasairam1981@gmail.com
Fig. 1 Percentage of deaths (all ages) in the U.S. between 2000 and 2010
India is the second largest country by population and is confronting the problems of its
elders; Alzheimer's disease may affect a person at any time and isolates the person from
society. According to U.S. statistics, a large share of deaths is due to Alzheimer's; the
statistics are shown in Fig. 1 [2].
As per India's 2011 census, more than 104 million persons are above 60 years of age.
They constitute only 8.2% of the total population, but this number is expected to grow
significantly in the coming decades. The number of persons with dementia doubles every
5 years, so many elders in India will face this problem [3].
Alzheimer’s occurs due to the loss of the tissue and the death of the nerve cell.
Over a time, the brain gets shrunk and affects the entire functionality of the brain.
The affected person fails to think, arrange things, and recollect due to cortex wilts.
The shrinking of hippocampus affects the process of recalling. Initially, Alzheimer’s
symptoms, i.e., the shrinking of hippocampus starts 20 years before the diagnosis of
the primary symptom of failing to learn, think, and plan. As the disease progresses,
the individual may experience the behavioral changes such as recognizing friends
and family members. In the advanced stage, the cortex is totally damaged due to
plague and tangles. The stages of the brain in Alzheimer’s are shown in Fig. 2.
T1-weighted and T2-weighted MRI images are used to analyze Alzheimer's by observing
the gray matter, white matter, and CSF of the brain, together with the shrinkage of the
hippocampus and cortex and the widening of the ventricles [4].
This paper designs a hybrid kernel SVM to classify the Alzheimer's stages. The paper is
organized as follows: an introduction to Alzheimer's disease and how it affects urban
India; a review of existing machine learning techniques used for classifying such data;
the proposed system, in which the SVM and MLP algorithms are hybridized into a new
classifier; and the analysis of the data in the results and conclusion.
2 Existing Techniques
To separate mild cognitive impairment from normal aging [5], a linear regression
mechanism is used as a classifier, giving an area under the curve of 0.87, a sensitivity of
0.85, and a specificity of 0.80 between persons with mild and normal dementia [6].
Alzheimer's disease in mild cognitive impairment has been classified using histogram-based
analysis [7]. Classification of Alzheimer's has also been done by removing nuisance
features using linear regression [8]. Hippocampus size is used to measure the stage of
Alzheimer's, because the hippocampus is the first structure affected [9]. Alzheimer's has
also been evaluated based on probabilistic information about the hippocampus volume
using voxel-based morphometry [10].
3 Proposed Method
For this work, 54 different pre-labeled MRI Slice Images are collected from http://
www.med.harvard.edu/AANLIB. Features extracted from these images are used to
train and validate the Classifiers.
The raw MRI is enhanced using histogram equalization. Redundant information such as
the skull, fat, and ears is removed from the image using a skull-stripping process.
To strip the skull, a convolution filter K(i, j) with a 3 × 3 kernel built from the exponential
Taylor series in Eq. (1) is used:

e^x = 1 + x + x²/2! + x³/3! + ···   (1)

Figure 4a shows the exponential kernel with x = −1, Fig. 4b the original image, and
Fig. 4c the result of convolution and normalization. The resultant image is then
thresholded into a binary image using Eqs. (2) and (3). After thresholding, the image is
eroded to obtain Fig. 5a, b. These images are used as a mask to strip the skull from the
raw MRI, which produces the resultant image in Fig. 6.
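A rough Python sketch of these skull-stripping steps (exponential 3 × 3 kernel, convolution and normalization, thresholding, erosion, masking) is given below. The kernel value, threshold, structuring element, and largest-component cleanup are illustrative assumptions and are not the authors' exact settings.

```python
import numpy as np
from scipy import ndimage

def strip_skull(mri, x=-1.0, threshold=0.5):
    """Rough skull-stripping sketch: exponential 3x3 kernel, convolution,
    normalization, thresholding, erosion, and masking of the raw MRI."""
    # 3x3 kernel filled with e^x approximated by the truncated Taylor series (Eq. 1)
    e_x = 1 + x + x**2 / 2.0 + x**3 / 6.0
    kernel = np.full((3, 3), e_x)

    # Convolution followed by normalization to [0, 1]
    conv = ndimage.convolve(mri.astype(float), kernel, mode='reflect')
    conv = (conv - conv.min()) / (conv.max() - conv.min() + 1e-12)

    # Threshold to a binary image, then erode to pull away from the skull boundary
    binary = conv > threshold
    mask = ndimage.binary_erosion(binary, structure=np.ones((5, 5)), iterations=3)

    # Keep only the largest connected component as the brain mask
    labels, n = ndimage.label(mask)
    if n > 0:
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        mask = labels == (np.argmax(sizes) + 1)

    return mri * mask, mask
```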
The skull-stripped image is then segmented to extract the white matter, gray matter, and
CSF of the brain, as shown in Fig. 7a–e. The detailed algorithm is given as follows.
Fig. 7 a Mask of the skull-stripped image, b image with WM and GM, c CSF segmented image,
d image with white matter, e image with gray matter
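A common way to perform this tissue separation is to cluster the intensities of the skull-stripped voxels into three classes. The sketch below uses a simple k-means clustering and is only an illustrative stand-in for the authors' segmentation algorithm; the class ordering assumes a T1-weighted slice.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_tissues(stripped, mask, seed=0):
    """Cluster brain-voxel intensities into three classes, ordered by mean
    intensity (in a T1-weighted slice: CSF < gray matter < white matter)."""
    voxels = stripped[mask].reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=3, n_init=10, random_state=seed).fit(voxels)

    # Relabel clusters so that 1 = CSF, 2 = gray matter, 3 = white matter
    order = np.argsort(km.cluster_centers_.ravel())
    lut = np.empty(3, dtype=int)
    lut[order] = [1, 2, 3]

    class_map = np.zeros(stripped.shape, dtype=int)   # 0 = background
    class_map[mask] = lut[km.labels_]
    return class_map

# Example usage with the output of strip_skull():
# stripped, mask = strip_skull(mri)
# tissues = segment_tissues(stripped, mask)
# gm_area = np.count_nonzero(tissues == 2)   # gray-matter pixel count
```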
The texture of the image changes as the disease advances. The Gray Level Co-occurrence
Matrix (GLCM) is used to extract texture features of the brain image. The GLCM is
generated by counting the co-occurrence of pixel intensities between neighbors at 0°
orientation and is defined as a square matrix; i and j denote intensities of the normalized
image, which yields a probability-based matrix [5] denoted as P(i, j).
From the segmented image, the white matter area, gray matter area, and CSF area are
calculated along with the GLCM texture features using R with the radiomics and EBImage
packages. The collected features are placed into Excel and converted into a .CSV file.
From the GLCM matrix, the following texture parameters are calculated [11, 12] (an
illustrative code sketch of this feature extraction follows the parameter table):
Parameter                           Formula
Mean in i direction                 μi = Σi Σj i·P(i, j)                                 (8)
Mean in j direction                 μj = Σi Σj j·P(i, j)                                 (9)
Correlation                         Σi Σj (i − μi)(j − μj)·P(i, j) / (σi·σj)             (10)
Homogeneity                         Σi Σj P(i, j) / (1 + |i − j|)                        (11)
Energy                              Σi Σj P²(i, j)                                       (12)
Entropy                             −Σi Σj P(i, j)·log(P(i, j))                          (13)
Standard deviation in i direction   σi = √( Σi Σj (i − μi)²·P(i, j) )                    (14)
Standard deviation in j direction   σj = √( Σi Σj (j − μj)²·P(i, j) )                    (15)
Angular second moment               Σi Σj P(i, j)²                                       (16)
Variance                            Σi Σj (i − μi)²·P(i, j) + Σi Σj (j − μj)²·P(i, j)    (17)
Cluster shade                       Σi Σj (i + j − μi − μj)³·P(i, j)                     (18)
Cluster prominence                  Σi Σj (i + j − μi − μj)⁴·P(i, j)                     (19)
Inertia                             Σi Σj (i − j)²·P(i, j)                               (20)
Dissimilarity                       Σi Σj |i − j|·P(i, j)                                (21)
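The paper computes these features in R with the radiomics and EBImage packages; a roughly equivalent Python sketch using scikit-image is shown below for illustration. Only a subset of the tabulated features is computed directly, a few (entropy, cluster shade, cluster prominence) are evaluated from the matrix itself, and the quantization level, distance, and angle are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_img, levels=64):
    """Compute a subset of the GLCM texture features listed in the table.
    Distance 1 and 0-degree orientation are assumed, as in the text."""
    # Quantize the image to `levels` gray levels (GLCM is levels x levels)
    bins = np.linspace(gray_img.min(), gray_img.max(), levels)
    img = (np.digitize(gray_img, bins) - 1).astype(np.uint8)

    glcm = graycomatrix(img, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                      # normalized co-occurrence matrix

    feats = {prop: graycoprops(glcm, prop)[0, 0]
             for prop in ('contrast', 'dissimilarity', 'homogeneity',
                          'energy', 'correlation', 'ASM')}
    feats['entropy'] = -np.sum(p[p > 0] * np.log(p[p > 0]))          # Eq. (13)

    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing='ij')
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)                         # Eqs. (8), (9)
    feats['cluster_shade'] = np.sum((i + j - mu_i - mu_j) ** 3 * p)   # Eq. (18)
    feats['cluster_prominence'] = np.sum((i + j - mu_i - mu_j) ** 4 * p)  # Eq. (19)
    return feats
```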
The dataset has 24 attributes related to texture and geometric features. Some of them are
redundant and not useful for classification, so the features are reduced using Principal
Component Analysis (PCA).
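A minimal sketch of this dimensionality-reduction step with scikit-learn is shown below. The file name and the explained-variance threshold are assumptions, and the CSV is assumed to contain only numeric feature columns.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Load the 24-attribute feature table exported as .CSV (path is a placeholder)
features = pd.read_csv("glcm_features.csv")     # numeric feature columns only
X = StandardScaler().fit_transform(features.values)

# Keep enough principal components to explain 95% of the variance (assumed threshold)
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape, pca.explained_variance_ratio_)
```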
Let the input data x ∈ {x1, x2, …, xN} have N data points. The support vector machine
classifies the two classes {−1, 1}, known as binary classification, with good accuracy. A
linear SVM cannot be used directly for multiclass classification, so kernels are added to
provide the required classification [13].
Let ϕ(xi) be the corresponding vectors in the attribute space, where ϕ(·) is the implicit
mapping function, and let X(xi, xj) = ϕ(xi) · ϕ(xj) be the kernel function, which implies a
dot product in the feature space [14]. Different kernel functions X(xi, xj) are used for
multiclass classification [15, 16]; a polynomial kernel is used in this algorithm.
The MLP is a supervised classifier with at least three layers. The input data are given to
the first layer. The hidden layers manipulate the input data together with the weights and
biases, and the result is passed to an activation function to generate the output. The last
layer provides the classification into the predefined classes.
Let the input layer have N inputs. The inputs are passed to the hidden layer, whose nodes
receive the input data along with their weights {W1, W2, W3, …, WN} and the bias B
applied to the hidden layer. This output is passed through the activation function, which
provides the decision based on the inputs of the hidden node.
In most cases, a sigmoid function is used, and back-propagation with a gradient mechanism
and momentum is used to minimize the error.
The main issue with the dataset is that the attributes collected from the images are more or
less similar to one another. If the MLP alone is applied to the dataset, the task reduces to a
binary problem, and a binary problem is best handled with an SVM, which constructs a
hyperplane between the two classes. The experiment is therefore performed with a
two-hidden-layer MLP followed by an SVM.
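One plausible reading of this hybrid is to train a two-hidden-layer MLP, use its hidden-layer activations as transformed features, and feed them to a polynomial-kernel SVM. The sketch below follows that reading with scikit-learn; the layer sizes, kernel degree, and train/test split are assumptions and this is not necessarily the authors' exact implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, precision_score, recall_score

def train_hybrid(X, y, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=seed)

    # Two-hidden-layer MLP (sizes are assumed) trained on the PCA features
    mlp = MLPClassifier(hidden_layer_sizes=(32, 16), activation='logistic',
                        max_iter=2000, random_state=seed).fit(X_tr, y_tr)

    # Re-use the MLP's hidden layers as a feature transformer
    def hidden_features(X_in):
        h = X_in
        for w, b in zip(mlp.coefs_[:-1], mlp.intercepts_[:-1]):
            h = 1.0 / (1.0 + np.exp(-(h @ w + b)))     # logistic activation
        return h

    # Polynomial-kernel SVM on the MLP-derived features
    svm = SVC(kernel='poly', degree=3, C=1.0).fit(hidden_features(X_tr), y_tr)

    y_pred = svm.predict(hidden_features(X_te))
    print(confusion_matrix(y_te, y_pred))
    print("precision:", precision_score(y_te, y_pred, average='macro'),
          "recall:", recall_score(y_te, y_pred, average='macro'))
    return mlp, svm
```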
In this paper, 54 grayscale MRI slice images of size 256 × 256 are used to extract the
features. The features are loaded as a .CSV file, and the resulting data frame is passed to
PCA to select the features required to train the classifier.
After training and validation, the confusion matrix is generated, from which the true
positives, false positives, precision, and recall are calculated.
True positive: the original data is true and is predicted as positive.
False positive: the original data is false but is predicted as positive.
Precision = TP rate / (TP rate + FP rate)   (25)
Recall = TP rate / (TP rate + FN rate)   (26)
Tables 1, 2, and 3 show the comparison of all three classifiers; it is observed that the
hybrid kernel SVM performs better than the linear-kernel SVM. Figure 8 shows the
performance analysis as a bar chart.
Fig. 8 Performance analysis comparison of kernel-based SVM, MLP, and hybrid classifier
The performance of the proposed system against other classifiers is evaluated with the
Weka Knowledge Flow, as shown in Fig. 9. The parameters used to evaluate the
performance of the classifiers are tabulated in Table 4 and plotted in Fig. 10.
5 Conclusion
Fig. 9 Knowledge flow of the KSVM, MLP, and proposed hybrid kernel SVM classifier
of Alzheimer’s. The primary diagnosis of the disease will help the patient to follow
the necessary diet and take precautions.
References
Abstract This paper proposes a new design for sensing pressure with high sensitivity
based on a piezoresistive MEMS sensing mechanism. The proposed structure is a
single-sided cantilever with two different dimensional sections, which makes it useful for
measuring both high and low pressures, and the piezoresistive material is applied only on
the high-stress deflecting area to obtain more accurate values. The displacement and the
maximum stress of the piezoresistive layer are simulated. The design is mainly intended
to identify intracranial changes in the brain: with the help of the piezoresistive layer, the
results are obtained through the potential variation. This potential difference is the main
advantage of the design, because it can be connected directly to the application without
any conversion.
1 Introduction
The pressure inside the skull due to the brain tissue and Cerebrospinal Fluid (CSF) is
called Intracranial Pressure (ICP). This pressure is measured in millimeters of mercury
(PmmHg) and is mainly produced by abrupt changes
N. Kalaiyazhagan (B)
Department of Electronics Engineering, Pondicherry Central University,
Kalapet 605014, Puducherry, India
e-mail: kalaiyazhaganece@gmail.com
T. Shanmuganantham (B)
Department of Electronics Engineering, Pondicherry University, Kalapet 605014, Puducherry,
India
e-mail: shanmugananthamster@gmail.com
D. Sindhanaiselvi
Pondicherry Engineering College, Pillaichavadi 605014, Puducherry, India
e-mail: sindhanaiselvi@pec.edu
in intrathoracic pressure, coughing, etc. A normal adult has an ICP in the range of
7–15 mmHg, while values of 20–25 mmHg are considered elevated.
Sensor sensitivity can be easily calculated using piezoresistive analysis [1]. Predicting a
clinical diagnosis before symptoms become apparent was discussed in [2]. Research on
Traumatic Brain Injury (TBI) has shown that this condition is mostly found in athletes
involved in contact sports (boxing, football, hockey, etc.) [3]. The evolution of MEMS
devices, ranging from micrometers to millimeters, has enabled in vivo monitoring of pH
levels and oxygen pressure [4, 5]. Capacitive sensors, which are more sensitive to pressure
than to temperature changes, produce an output signal that is nonlinear with respect to the
input [6].
In the proposed model shown in Fig. 1, the pressure sensor deflects when a small pressure
is applied on the sensing layer of the beam, and it also tracks frequent changes of the ICP,
which produce larger deflections of the cantilever. With this information, the doctor can
take safety precautions before the patient's health is at risk.
When the piezoresistive layer is applied on the cantilever surface, it converts mechanical
energy into an electrical signal, which suits a wireless body area network design; applying
external current and voltage to the brain carries many risks. The piezoresistive pressure
cantilever therefore measures the ICP without any applied voltage or current and produces
a potential output.
Al2O3 (aluminum oxide) is used as the sensing layer over the piezoresistive layer, because
this material stresses the piezoresistive layer and generates the potential output. Wherever
pressure is applied on the cantilever, it is reflected at the beam edge. Using silicon as the
substrate gives more physical strength to the cantilever. Two sections with different
dimensions are introduced to detect the high-level and low-level variations in the ICP of
the patient.
The finite element method is a computer-aided mathematical technique used to obtain
approximate numerical solutions of abstract equations; it calculates the reaction of a
physical system subjected to external influences. Here, the deflection represents the
displacement of the cantilever after a pressure is applied on the surface of the sensing layer:

δ = (1/3)·w(L + rN)²/E + (5/6)·w(L + rN)/(AG)   (2)
Stress is maximum at the edge where the displacement is minimum; there, the sensitivity,
or output voltage, is highest:

S = 0.1144·π44·(1 − v)·(a/h)²   (6)

Here R is the resistance of the material and ΔR is the change in resistance of the cantilever,
S is the sensitivity of the cantilever, π44 is the piezoresistive coefficient, and a/h is the
aspect ratio of the cantilever. When the aspect ratio increases, the sensitivity also increases.
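To make Eqs. (2) and (6) concrete, the small Python sketch below evaluates them for placeholder parameter values. All numerical values (load, geometry, material constants, and the π44 coefficient) are illustrative assumptions, not the paper's design values.

```python
import math

# Illustrative (assumed) values, SI units
w = 1e-3           # load per unit length, N/m
L = 130e-6         # beam length, m
r_N = 0.0          # offset term from Eq. (2), assumed zero here
E = 170e9          # Young's modulus of silicon, Pa
A = 5e-6 * 25e-6   # cross-sectional area (width x thickness), m^2
G = 65e9           # shear modulus of silicon, Pa

# Eq. (2): tip deflection with bending and shear contributions
delta = (1.0 / 3.0) * w * (L + r_N) ** 2 / E + (5.0 / 6.0) * w * (L + r_N) / (A * G)

# Eq. (6): sensitivity from the piezoresistive coefficient and aspect ratio a/h
pi44 = 138.1e-11   # piezoresistive coefficient of p-type silicon, 1/Pa (assumed)
v = 0.28           # Poisson's ratio of silicon (assumed)
a, h = 130e-6, 25e-6
S = 0.1144 * pi44 * (1 - v) * (a / h) ** 2

print(f"deflection delta = {delta:.3e}, sensitivity S = {S:.3e}")
```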
The dimensions are designed with respect to the aspect ratio. The design has three layers:
the substrate, the sensing layer, and the piezoresistive layer.
Substrate: silicon with a resistivity of 2.5 Ω·cm is used, which distinguishes it from the
piezoresistive layer. The dimensions are 130 × 5 μm with a thickness of 25 μm, and
silicon has special characteristics that improve the sensitivity of the pressure sensor.
Sensing layer: aluminum oxide as the sensing material gives a high stress variation on the
surface of the cantilever, which disturbs the piezoresistive layer and produces a high
potential difference.
Piezoresistive layer: silicon is the best material for the piezoresistive layer and produces a
potential difference corresponding to the variation of stress in the sensing layer. The
stress, displacement, sensitivity, and potential difference are shown in Table 4.
Figures 3 and 6, Figs. 4 and 7, and Figs. 5 and 8 show the displacement, the potential
difference, and the von Mises stress, respectively. Figures 9, 10, and 11 show the graphs
of stress, displacement, and potential difference versus applied pressure.
4 Conclusion
The motive of this design is to fulfill the frequent pressure measurement in the human
brain for ICP. Then here the proposer used pressure ranges from low 2 PmmHg to
high 20 PmmHg. The piezo-resistive potential difference always depends on stress
produced by the cantilever beam. By calculating displacement, the sensitivity is
calculated also along with the applied pressure, wherever the resistance is more,
the potential is more with respect to the stress. The minimum stress of 16 MPa and
the maximum stress of 162 MPa with applied voltage. By this design, the accurate
deflection of ICP is detected and delivered to the user.
References
Abstract Nowadays, cantilever beams are used in the biomedical field as a medium to
detect diseases. A basic cantilever beam detects only one disease at a time; in this paper,
a beam structure is designed to detect four diseases at a time, so the device is called the
Quad Bio-MEMS Disease Detector (QBDD). The IntelliSuite software is used to obtain
the simulation results for the static analysis, and MATLAB is used to find the frequency
response for the dynamic analysis. This paper mainly deals with the miniaturization of
devices at the microscale for use in the medical field at low cost and with better accuracy.
1 Introduction
A system that combines mechanical functions, such as heating and sensing, with electrical
functions at the microfabrication scale is called a Microelectromechanical System (MEMS).
When MEMS are used in the biomedical field, they are called Bio-MEMS. A cantilever is
a long projecting rectangular beam or girder fixed at one end, with the other end free [1].
The surface of the cantilever carries a layer that senses the biomolecule in the sample [2];
this layer is called the sensing layer. The upper surface of the cantilever beam is coated
with the sensing layer, which is further coated with an antibody that reacts with the
suspected antigen in the patient's blood sample [3]. When the antigen reacts with this
antibody, adsorption takes place, a surface stress is generated due to the increase in mass,
and the cantilever bends [4].
Microcantilever sensor can be analyzed in two ways;
1. Static analysis.
2. Dynamic analysis.
1. Static analysis
In static analysis, the displacement, stress, and strain caused by the applied load are
calculated [5]; the cantilever bends due to the increase in mass.
2. Dynamic analysis
In dynamic analysis, the natural frequency and damping ratio of the beam are calculated,
since the increase in mass causes a shift in frequency [5]. As the beam mass increases,
oscillations are produced on the beam, which shift its resonant frequencies.
In this paper, four diseases are detected using a single microsystem design: Wilson
disease, Menkes disease, hemochromatosis, and blue baby disease. Wilson disease is
mainly due to excess copper, which primarily affects the liver; Menkes disease is mainly
due to copper deficiency; hemochromatosis is mainly due to excess iron; and blue baby
disease is mainly due to excess nitrate in the body.
2 Microsystem Design
Microsystems are designed in such a way they are used in the field of biomedical
applications. A microcantilever with four beams is designed to detect four diseases
which are termed as Quad Bio-MEM Disease Detector (QBDD) is shown in Fig. 1.
The antigen weight is converted to pressure and added on the cantilever surface,
where this pressure is sensed by sensing layer on beam. As a part of sensing, stress
is induced.
The upper surface of the designed microsystem is coated with a sensing layer to sense the
targeted biomolecule. The left beam is used to detect Wilson disease, for which an antigen
of molecular mass 140 kDa is added; as the load is added, the beam deflects due to the
increase in mass. The beam at the higher altitude is used to detect Menkes disease, with an
antigen of molecular mass 48 kDa. The right beam is used to detect hemochromatosis,
with an antigen of molecular mass 100 kDa. The beam at the lower altitude is used to
detect blue baby disease, with an antigen of molecular mass 57 kDa; in each case, the
added load deflects the beam due to the increase in mass, as represented in Fig. 2.
Silicon is used as the material for the beams because it has lower electrical and mechanical
hysteresis losses than germanium and gallium arsenide. Silicon is processed at a
temperature of 350 °C, at which thin-film wafers are obtained. The material properties
used for the simulation are presented in Table 1, and the MEMS sensor with four
cantilevers is shown in Fig. 3.
Frequency Analysis:
The frequency response of the designed microsystem is found using MATLAB for
different lengths, widths, and thicknesses. The mathematical equation used for the
frequency calculation is

f = (1/2π)·√(k/m)

where
k  spring constant
E  Young's modulus
w  width
t  thickness
l  length
m = d·v
where
d  density of the Si material
v  volume of the cantilever beam
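A short Python equivalent of the MATLAB frequency computation is sketched below. Since the paper does not reproduce the spring-constant expression, the standard end-loaded cantilever stiffness k = E·w·t³/(4·l³) is assumed, and the material values are typical silicon properties; the computed number is purely illustrative.

```python
import math

E = 170e9        # Young's modulus of silicon, Pa (typical value, assumed)
density = 2330   # density of silicon, kg/m^3

def natural_frequency(length, width, thickness):
    """f = (1/(2*pi)) * sqrt(k/m), with k = E*w*t^3 / (4*l^3) (assumed standard
    cantilever stiffness) and m = density * volume of the beam."""
    k = E * width * thickness ** 3 / (4.0 * length ** 3)
    m = density * length * width * thickness
    return math.sqrt(k / m) / (2.0 * math.pi)

# Evaluate for one of the dimension sets discussed in the text
f = natural_frequency(150e-6, 150e-6, 0.1e-6)
print(f"f = {f/1e3:.2f} kHz")
```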
Fig. 4 Frequency response of cantilever for different diseases at different lengths. a The frequency
response of cantilever for Wilson disease at different lengths, b the frequency response of cantilever
for Menkes disease at different lengths, c the frequency response of cantilever for hemochromatosis
disease at different lengths, d the frequency response of cantilever for the blue baby disease at
different lengths
The designed microsystem is varied in length from 50 to 150 µm for all four beams, while
the other dimensions of the beams, the width and the thickness, are kept constant.
Frequency responses:
In Fig. 4, the frequency responses of all the beams show the maximum frequency at a
length of 150 µm.
Fig. 5 Frequency response of cantilever for different diseases at different widths. a The frequency
response of cantilever for Wilson disease at different widths, b the frequency response of cantilever
for Menkes disease at different widths, c the frequency response of cantilever for hemochromatosis
disease at different widths, d the frequency response of cantilever for the blue baby disease at
different widths
The designed microsystem is varied in width, keeping the length and thickness of the
beams constant.
Frequency response of the beams detecting the diseases at different widths:
In Fig. 5, the frequency responses of all the beams show the maximum frequency at a
width of 150 µm.
The microsystem is varied in thickness, with the length and width kept constant for all
four beams in the design.
Frequency response of the beams detecting the diseases at different thicknesses:
Fig. 6 Frequency response of cantilever for different diseases at different thickness. a The fre-
quency response of cantilever for Wilson disease at different thickness, b the frequency response
of cantilever for Menkes disease at different thickness, c the frequency response of cantilever for
hemochromatosis disease at different thickness, d the frequency response of cantilever for the blue
baby disease at different thickness
In Fig. 6, the frequency responses of all the beams show the maximum frequency at a
thickness of 0.1 µm. For the dimensions considered, the designed microsystem therefore
produces its maximum frequency response at the greater length and width and the smaller
thickness, i.e., at a length of 150 µm, a width of 150 µm, and a thickness of 0.1 µm.
4 Conclusion
MEMS are growing rapidly in the field of biomedical applications. The microsystem
designed in this paper can detect four diseases at a time, with better results than traditional
methods such as ELISA and PCR. In the ELISA method, plastic properties are added
during testing and can give false results, and it takes about 4 or 5 h to obtain the result.
The proposed Bio-MEMS sensor therefore gives faster results than these traditional
techniques.
References
1. Saeed MA, Khan SM, Rao U (2016) Design, and analysis of capacitance based Bio-MEMS
cantilever sensor for tuberculosis detection. In: IEEE international conference on intelligent
system engineering
2. Murthy KSN, Prasad GRK, Saikiran NLNV, Manoj TVS (2016) Design and simulation of
MEMS biosensor for the detection of tuberculosis. Indian J Sci Technol 9(31)
3. Jain V, Verma S (2013) Design and analysis of MEMS piezoresistive three layers micro
cantilever-based sensor for biosensing applications. Int J Innov Technol Explor Eng 2(5)
4. Chaudary M, Gupta A (2009) Microcantilever-based sensors. Def Sci J 59(6)
5. Frometa NR (2006) Cantilever biosensor methods. Biotechnol Apl J (2006)
Providing Security Towards
the MANETs Based on Chaotic Maps
and Its Performance
Arshad Ahmad Khan Mohammad, Ali Mirza Mahmood
and Srikanth Vemuru
Abstract A mobile ad hoc network (MANET) consists of mobile nodes that communicate
with one another through a radio channel. This wireless channel is vulnerable to security
attacks, so a MANET needs an effective security mechanism to protect the network. In the
literature, different security mechanisms have been designed to solve the security issues
via cryptographic techniques. Security mechanisms should not impose computation and
storage overhead on a MANET, as this network is resource constrained. In this work, we
therefore compare the performance of cryptographic solutions designed for MANETs
based on RSA and on Chaotic Maps. The performance results show that RSA is one of the
best cryptographic algorithms for providing security, but its time complexity is higher than
that of the Chaotic Maps-based technique. Moreover, time complexity has a negative
impact on overall network performance, particularly the end-to-end delay. We conclude
that the Chaotic Maps-based cryptographic technique is a good alternative to RSA, with
less overhead and appropriate security.
1 Introduction
A MANET has no fixed infrastructure, a dynamic topology, and constrained resources.
Thus, the network-layer functionality must be carried out by the nodes themselves to
provide end-to-end communication. Characteristics such as self-organization, autonomy,
and self-maintenance make a MANET [2] well suited to circumstances where infrastructure
is difficult to deploy or costly in terms of time and money. However, these same
characteristics make the MANET vulnerable to security attacks.
A number of security solutions have been proposed in the literature to mitigate attacks [3],
based on different cryptographic techniques [4]. The goal of a security mechanism is to
provide authentication, authorization, information integrity, and non-repudiation in the
network. Cryptographic techniques address these goals by converting plaintext into
ciphertext with the help of a suitable key, and their security strength depends directly on
key management, key distribution, and key maintenance. However, cryptographic
operations consume network resources such as processor time, memory, and energy. If the
resources consumed by a cryptographic technique exceed what the network can handle,
network performance suffers, and this impact is larger when the network resources are
constrained. Thus, while developing a cryptography-based security mechanism for a
MANET, one should take care of the overhead of the chosen technique.
In this work, we implement RSA- and Chaotic Maps-based security mechanisms in a
MANET environment and compare their performance with respect to different network
performance metrics. The remaining paper is organized as follows: the next section
discusses the RSA and Chaotic Maps methodology, the performance of RSA and Chaotic
Maps is evaluated in Sect. 4, the results are discussed in Sect. 5, and the work ends with
the conclusion.
To achieve secure communication and protect data from intruders, different security
mechanisms based on different cryptographic techniques have been proposed in the
literature. The terminology behind cryptography is that a plaintext document is
transformed with a cipher and a key to produce a ciphertext document, a process known as
encryption; decryption is the reverse process, and a cipher may utilize one or multiple keys.
Modern cryptographic systems are mainly divided into two forms, known as private-key
and public-key cryptography. Public-key cryptography uses different keys to encrypt and
decrypt a message, whereas private-key cryptography uses an identical key for both. In this
work, we do not consider private-key encryption, as it introduces additional overhead in
comparison with public-key encryption in this setting. Public-key schemes are designed
according to specific cryptographic standards, such as trusted authorities, specific
implementations of algorithms and protocols (including key sizes), generation of seeds and
random numbers, algorithm parameters, and assurance of hardware and/or software support.
The strength of any cryptographic technique greatly depends on its key calculation, key
distribution, and key maintenance procedures. Cryptanalysis, in contrast, is the process of
analyzing and breaking ciphertexts or specific codes; it is used to breach a cryptographic
system and recover the protected information by exploiting weaknesses in the system
implementation. According to Kerckhoffs' principle [5], "a cryptographic system must be
secure even if everything regarding the system, except the key, is public knowledge." The
strength of public-key cryptography thus depends on its key management, as only the key
is unknown to an intruder or attacker. In this work, we consider public-key cryptographic
techniques, namely RSA and a Chaotic Maps-based key agreement technique.
RSA [6, 7] is an asymmetric cryptographic technique that uses two different keys to
provide security. The technique specifies how to generate the key pair (public, private)
and how to encrypt and decrypt a message. A message is encrypted with the public key of
the receiver and decrypted with the private key of the receiver; the private key is kept
secret, while the public key is distributed publicly, which is why this technique is known
as public-key cryptography. Key generation is based on the multiplication of two large
prime numbers. The fundamental concept behind RSA is the observation that it is possible
to determine three very large positive integers e, d, and n such that, under modular
exponentiation, the relation in Eq. (1) holds for all m:

(m^e)^d ≡ m (mod n)   (1)

Even knowing the values of m, e, and n, it is exceedingly hard to calculate the value of d.
Moreover, for some operations it is convenient that the order of the two exponentiations
can be changed, and the relation also holds in the form of Eq. (2):

(m^d)^e ≡ m (mod n)   (2)
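The toy Python sketch below walks through RSA key generation, encryption, and decryption with deliberately small primes so that Eqs. (1) and (2) can be checked by hand. Real deployments use 1024-bit or larger moduli and proper padding, so this is purely illustrative.

```python
import math

# Toy key generation with small primes (insecure, for illustration only)
p, q = 61, 53
n = p * q                      # modulus
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime with phi
assert math.gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent: modular inverse of e mod phi

m = 65                         # plaintext as an integer, m < n
c = pow(m, e, n)               # encryption:  c = m^e mod n
recovered = pow(c, d, n)       # decryption:  m = c^d mod n        (Eq. (1))
assert recovered == m

# Eq. (2): the exponentiations commute, enabling signature-style usage
s = pow(m, d, n)
assert pow(s, e, n) == m
print(f"n={n}, e={e}, d={d}, ciphertext={c}")
```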
In our previous work [9], we use the semigroup property of the Chebyshev polynomials [11]
to provide authentication between communicating entities, as shown in Eq. (8):

Tr(Ts(X)) ≡ Trs(X) ≡ Ts(Tr(X)) (mod N)   (8)

where N is a big prime number and X ∈ (−∞, +∞). For Eq. (8), it is infeasible to compute
the value of n given the values of Tn(X), X, and N; this property is known as the Chaotic
Maps-based Discrete Logarithm Problem.
Further, the Chaotic Maps-based Diffie–Hellman problem states that, for Eq. (9), it is
infeasible to compute the value of Tnm(X) given the values of Tn(X), X, N, and Tm(X):

Tnm(X) ≡ Tn(Tm(X)) ≡ Tm(Tn(X)) (mod N)   (9)
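A toy Python sketch of a Chaotic Maps-based key agreement built on these properties follows. T_n(x) mod N is evaluated through the matrix form of the Chebyshev recurrence so that large n stays cheap; the prime N, base point X, and secret exponents are tiny illustrative values, far below anything secure, and the protocol shown is a bare Diffie–Hellman-style exchange rather than the full authenticated scheme of [9].

```python
def cheb_mod(n, x, N):
    """T_n(x) mod N via the matrix form of T_k = 2*x*T_{k-1} - T_{k-2}."""
    if n == 0:
        return 1 % N
    def mat_mul(A, B):
        return [[(A[0][0]*B[0][0] + A[0][1]*B[1][0]) % N,
                 (A[0][0]*B[0][1] + A[0][1]*B[1][1]) % N],
                [(A[1][0]*B[0][0] + A[1][1]*B[1][0]) % N,
                 (A[1][0]*B[0][1] + A[1][1]*B[1][1]) % N]]
    M = [[(2 * x) % N, N - 1], [1, 0]]   # N - 1 plays the role of -1 mod N
    R = [[1, 0], [0, 1]]                 # identity matrix
    k = n - 1
    while k:                             # fast exponentiation of M
        if k & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        k >>= 1
    # [T_n, T_{n-1}]^T = R * [T_1, T_0]^T with T_1 = x, T_0 = 1
    return (R[0][0] * x + R[0][1]) % N

# Toy Diffie-Hellman-style key agreement based on Eqs. (8) and (9)
N, X = 1000003, 12345        # public prime modulus and base point (toy values)
a, b = 7919, 104729          # Alice's and Bob's secret integers (toy values)

A_pub = cheb_mod(a, X, N)    # Alice sends T_a(X) mod N
B_pub = cheb_mod(b, X, N)    # Bob sends T_b(X) mod N

key_alice = cheb_mod(a, B_pub, N)   # T_a(T_b(X)) mod N
key_bob = cheb_mod(b, A_pub, N)     # T_b(T_a(X)) mod N
assert key_alice == key_bob          # both equal T_ab(X) mod N by Eq. (8)
print("shared key:", key_alice)
```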
We evaluated the performance of RSA and Chaotic Maps in an identical static environment,
on a machine with a dual-core 2.33 GHz processor, 2 GB DDR2 RAM, and a 160 GB hard
disk. We varied the prime-number size up to 1024 bits and measured the computation time
of each algorithm. It is clear from this analysis that the computation time of Chaotic Maps
is much lower than that of RSA, as shown in Fig. 1. Computation time affects network
performance by increasing the end-to-end delay, consuming energy, and occupying buffer
space.
In this section, we implement the RSA and Chaotic Maps-based key agreement between communicating entities on top of the underlying routing algorithms designed for MANET, and we evaluate the performance of the network with respect to end-to-end delay, since the computation time of the algorithm directly affects the end-to-end delay of the network.
5 Results Discussion
A MANET is a peer-to-peer network in which every node must also perform the routing task. The load on each node is therefore high: it receives packets, makes forwarding decisions based on the protocol, and forwards packets, all of which consume the node's time, energy, and memory. Providing security is a most challenging and desirable task because of these characteristics, but it must impose minimum overhead on network performance. Authentication is a simple and convenient way to achieve security in a MANET, and it can be provided through RSA or Chaotic Maps by an authenticated key agreement between the communicating entities. We have therefore evaluated RSA and Chaotic Maps-based key agreement in a MANET environment. It is clear from Figs. 2, 3 and 4 that the overhead of RSA is higher than that of the Chaotic Maps key agreement protocol; the computational time of Chaotic Maps is lower than that of the RSA-based key agreement. Our results demonstrate that, in order to provide security in a MANET, Chaotic Maps is a suitable replacement for RSA. From a security point of view, the work in [15] concludes that an adversary cannot compute the Chaotic Maps authentication key in polynomial time.
Fig. 2 AODV end-to-end delay performance under RSA and chaotic maps
Fig. 3 AOMDV end-to-end delay performance under RSA and chaotic maps
Fig. 4 DSR end-to-end delay performance under RSA and chaotic maps
6 Conclusion
References
1. Bolster A, Marshall A (2016) Analytical metric weight generation for multi-domain trust
in autonomous underwater MANETs. In: 2016 IEEE third underwater communications and
networking conference (UComms). IEEE
2. Loo J, Mauri JL, Ortiz JH (eds) Mobile ad hoc networks: current status and future trends. CRC
Press
3. Prashar L, Kapur RK (2016) Performance analysis of routing protocols under different types of
attacks in MANETs. In: 2016 5th international conference on reliability, infocom technologies
and optimization (trends and future directions) (ICRITO). IEEE
4. Stallings W (2006) Cryptography and network security: principles and practices. Pearson Edu-
cation India
5. Petitcolas FAP, Anderson RJ, Kuhn MG (1999) Information hiding—a survey. In: Proceedings
of the IEEE 87.7, pp 1062–1078
6. Mustafi K et al (2016) A novel approach to enhance the security dimension of RSA algorithm
using bijective function. In: 2016 IEEE 7th annual information technology, electronics and
mobile communication conference (IEMCON). IEEE
7. MacKenzie P, Patel S, Swaminathan R (2000) Password-authenticated key exchange based on
RSA. In: International conference on the theory and application of cryptology and information
security. Springer, Berlin, Heidelberg
8. Mason JC, Handscomb DC (2002) Chebyshev polynomials. CRC Press
9. Mohammad AAK, Mirza A, Vemuru S (2016) Cluster based mutual authenticated key agree-
ment based on chaotic maps for mobile ad hoc networks. Indian J Sci Technol 9(26)
10. Tao Y (2016) An authentication scheme for multi-server environments based on chaotic maps.
Int J Electron Secur Digit Forensics 8(3):250–261
11. Datko R (1970) Extending a theorem of AM Liapunov to Hilbert space. J Math Anal Appl
32(3):610–616
12. Belkneni M et al (2016) Network layer benchmarking: investigation of AODV dependability.
In: International symposium on computer and information sciences. Springer International
Publishing
13. Varshney A, Maheshwari P (2016) Comparative study of AODV and AOMDV routing protocol.
Int J Control Theory Appl 9(6)
14. Brocki BC et al (2016) Postoperative inspiratory muscle training in addition to breathing
exercises and early mobilization improves oxygenation in high-risk patients after lung cancer
surgery: a randomized controlled trial. Eur J Cardio-Thoracic Surg 49(5):1483–1491
15. Zhu H (2015) Flexible and password-authenticated key agreement scheme based on chaotic
maps for multiple servers to server architecture. Wirel Pers Commun 82(3):1697–1718
Efficient Precharge-Free CAM
Match-Line Architecture Design for Low
Power
1 Introduction
applications like image processing [3], gray coding [4], IP routing [5], and so on. On each search, because of the parallel comparison, many match lines (MLs) with large capacitance are active, and a large amount of power is consumed in the MLs [6, 7]. In addition, the short-circuit current path in the precharge phase also leads to high power consumption [10]. The challenge of this research is to reduce power without degrading performance. Conventionally, ML architectures are of two types: NAND-type and NOR-type. A NAND-type ML architecture consumes low power but has a large search delay because its cells are connected in series, and it also suffers from a charge-sharing problem. A NOR-type ML architecture offers less search delay but consumes high power because its cells are connected in parallel. Even in the worst case, the search delay of a NOR-type ML is better than that of a NAND-type ML. Generally, a design should not sacrifice performance; thus, when designing a CAM architecture, the NOR ML architecture is preferred over the NAND ML architecture.
The remainder of this paper is organized as follows. Section 2 reviews the traditional CAM organization with the NOR CAM and master-slave CAM ML architectures. Section 3 explains the proposed precharge-free CAM ML architecture. Section 4 compares results between the conventional CAM designs and the proposed CAM design. Section 5 concludes the paper.
2 Content-Addressable Memory
An array of CAM cells arranged in rows and columns forms the CAM ML architecture [8]. Each row is considered one word with its own ML and precharge circuitry. A typical NOR CAM cell is shown in Fig. 1. A CAM cell performs two functions: storing data and comparing the stored data with the search data. The storage may be designed with volatile or nonvolatile memory. In this design, the storage cell is built with a volatile 6T static random access memory (SRAM) cell, where the storage latch is formed by two cross-coupled inverters. The comparison circuitry is pass-transistor logic connected to the storage cell, which compares the stored data with the given search input data. Depending on the type of application, the NOR comparison part can be modified to XNOR or XOR logic. A pull-down transistor Pd is connected between the compare unit and the storage unit and is controlled by its gate; the transistor Pd is used to connect or disconnect the ML from ground.
NOR-type CAM cells arranged in parallel form the NOR CAM ML architecture shown in Fig. 2. The match-line architecture consists of an array of NOR CAM cells, the ML, and precharge circuitry. Searching for stored data in a CAM is performed only after the precharge phase: CAM operation starts with the precharge phase followed by the evaluation phase [9]. The precharge phase starts by setting the pre signal low, which turns the pMOS on; the ML output is then high irrespective of the search input data and stored data. The evaluation phase starts by setting the pre signal high, which turns the pMOS off; the ML output then depends on the search input data and the stored data. In the NOR-type evaluation phase, when the search input data is compared with the stored data, a mismatch drives the ML low and a match leaves it high. The power consumption of the NOR ML architecture is given by Eq. 1 [10]. The timing waveform of the NOR ML architecture is shown in Fig. 3. Equation 2 gives the total time required for completing one operation; this equation is common to all precharge-based architectures. Table 1 shows the truth table of the match/miss cases for the NOR CAM cell.
The power of the NOR ML is given by
$P_{NOR} = \sum_{n} \alpha_n \, C_{MLn} \, V_{DDn}^2$  (1)
where $\alpha_n$ is the switching activity, $C_{MLn}$ is the match-line capacitance, and $V_{DDn}$ is the supply voltage of the nth match line.
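The following is a behavioral sketch, not a circuit netlist, of a NOR-type CAM search together with the per-search match-line energy in the form above; the stored words, capacitance, supply voltage, and search rate are assumed values for illustration only.

```python
def nor_cam_search(stored_words, search_word):
    """Behavioral model of a NOR-type CAM: every word is compared in parallel;
    a match line stays high (match) only when all bits agree, otherwise at
    least one cell pulls it low (miss)."""
    return [word == search_word for word in stored_words]

def ml_search_energy(alpha, c_ml, vdd):
    """Per-search match-line energy in the alpha * C_ML * VDD^2 form given above."""
    return alpha * c_ml * vdd ** 2

stored = ["10110010", "01100111", "11110000"]     # 1 (word) x 8 (bits) entries, illustrative
matches = nor_cam_search(stored, "01100111")
print(matches)                                    # [False, True, False]

# In a NOR ML every mismatching line discharges and must be recharged,
# so the switching activity is roughly the miss fraction.
alpha = matches.count(False) / len(matches)
energy = ml_search_energy(alpha, c_ml=15e-15, vdd=1.0)
print(energy * 500e6)                             # average power at an assumed 500 MHz search rate
```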
The master-slave ML architecture consists of one master match line (MML) and one slave match line (SML) for a single word. The main purpose of this design is to share charge between the master ML and the slave ML when a mismatch occurs in the evaluation phase. This is a charge-refill minimization technique that reduces ML switching and hence reduces power [11]. Figure 4 shows the MSML architecture for MS1, which indicates one master ML and one slave ML. In addition to the MML and SML, an FML is used to represent the final match output. Similar to the NOR CAM, this design works in two phases: precharge and evaluation. In the precharge phase, both FML and MML are precharged high through the control signal pre. In this phase there is no charge-sharing path between the MML and SML, so the SML is discharged to zero. Here, the ML output is high irrespective of the stored data and search data. In the evaluation phase, the control signal pre changes to low, and the FML and MML outputs then depend on the loaded search input data and the data stored in the memory. Depending on the miss/match result, the SML has two possible cases. When there is a match, no charge path exists between the MML and SML, and the outputs of the MML and FML retain their previous state. When there is a mismatch, the SML shares a path with the MML and the charge is distributed between the two. The charge refill swing (CRS) in this case is given by CRS = V_DD − V_f, which is lower than the full swing. After full charge sharing, both MLs reach the same voltage level. According to [11], the final balancing voltage after charge sharing is given by Eq. 3. Table 2 shows the match/miss output for the master-slave architecture. The timing waveforms of the match and miss cases for the master-slave ML architecture are shown in Fig. 5.
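As a rough illustration of the charge-sharing step (not necessarily the exact form of Eq. 3 in [11]), ideal charge conservation between the precharged MML and the discharged SML gives the balancing voltage computed below; the capacitance values are assumptions.

```python
def balancing_voltage(c_mml, c_sml, vdd):
    """Final voltage after an ideal charge share between a precharged MML (at VDD)
    and a discharged SML (at 0 V): V_f = VDD * C_MML / (C_MML + C_SML)."""
    return vdd * c_mml / (c_mml + c_sml)

vdd = 1.0
v_f = balancing_voltage(c_mml=12e-15, c_sml=6e-15, vdd=vdd)  # illustrative capacitances
crs = vdd - v_f                                              # charge refill swing, CRS = VDD - V_f
print(round(v_f, 3), round(crs, 3))
```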
4 Comparison Results
In this paper, along with the conventional CAM designs, the proposed CAM design of size 1 (word) × 8 (bits) ML architecture was implemented in a GPDK 45-nm technology node CMOS process with a supply voltage of 1 V, and simulation was performed in the Cadence Virtuoso tool.
5 Conclusion
A low-power, efficient precharge-free CAM design is presented in this paper. The proposed design reduces power by removing the precharge phase and avoiding the short-circuit current path. The CAM architectures discussed in this paper are simulated in the Virtuoso tool on a 45-nm technology node CMOS process with a supply voltage of 1 V. The simulation results show that the proposed design achieves reduced power consumption for the different miss/match cases when compared with conventional CAM architectures such as the NOR CAM ML and the master-slave CAM ML. Thus, the proposed design is well suited for low-power memory applications with longer word lengths.
References
1. Cai Z et al (2013) A distributed TCAM coprocessor architecture for integrated longest prefix
matching, policy filtering, and content filtering. IEEE Trans Comput 62(3):417–427
2. Arsovski I, Ali S (2003) A current-saving match-line sensing scheme for content-addressable
memories. In: 2003 IEEE international solid-state circuits conference (ISSCC). Digest of tech-
nical papers. IEEE
3. Shin Y-C et al. (1992) A special-purpose content addressable memory chip for real-time image
processing. IEEE J Solid-State Circuits 27(5):737–744
4. Bremler-Barr A, Hendler D (2012) Space-efficient TCAM-based classification using gray cod-
ing. IEEE Trans Comput 61(1):18–30
5. Maurya SK, Clark LT (2011) A dynamic longest prefix matching content addressable memory
for IP routing. IEEE Trans Very Large Scale Integr (VLSI) Syst 19(6):963–972
6. Noda H et al (2005) A cost-efficient high-performance dynamic TCAM with pipelined hierar-
chical searching and shift redundancy architecture. IEEE J Solid-State Circuits 40(1):245–253
7. Agrawal B, Sherwood T (2008) Ternary CAM power and delay model: extensions and uses.
IEEE Trans Very Large Scale Integr (VLSI) Syst 16(5):554–564
8. Schultz KJ (1997) Content-addressable memory core cells: a survey. Integr VLSI J 23(2):171–188
9. Pagiamtzis K, Sheikholeslami A (2006) Content-addressable memory (CAM) circuits and
architectures: a tutorial and survey. IEEE J Solid-State Circuits 41(3):712–727
10. Kittur HM (2016) Precharge-free, low-power content-addressable memory. IEEE Trans Very
Large Scale Integr (VLSI) Syst 24(8):2614–2621
11. Chang Y-J, Wu T-C (2015) Master-slave match line design for low-power content-addressable
memory. IEEE Trans Very Large Scale Integr (VLSI) Syst 23(9):1740–1749
12. Mahendra TV, Sandeep M, Anup D (2017) Self-controlled high-performance precharge-free
content-addressable memory. IEEE Trans Very Large Scale Integr (VLSI) Syst
Improvement in Performance of Ternary
Sequence Using Binary Step Size LMS
Algorithm
K. Renu (B)
Department of ECE, GITAM (Deemed to be University),
Visakhapatnam, Andhra Pradesh, India
e-mail: renuengg12@gmail.com
P. Rajesh Kumar
Department of ECE, Andhra University, Visakhapatnam, Andhra Pradesh, India
e-mail: rajeshauce@gmail.com
1 Introduction
To overcome the practical problems of increasing the radar range with the desired
range resolution and accuracy, pulse compression has been used in many radar and
navigation systems. It helps to detect targets over long ranges using short pulses and
requires high peak power to obtain large pulse energy [1]. The detection capability
of radar is improved by transmitting long-coded pulse while the capability of high-
range resolution is retained when the received echo is processed to obtain a narrow
pulse. In order to achieve the maximum value of the signal-to-noise ratio at the
receiver, a matched filter is used. The theory of pulse compression deals with the
code that modulates the carrier during transmission which is considered as a reference
signal at the front end of the receiver. At the receiver side, this reference signal is
combined with the received signal. A binary code has two phases, 0° or 180°, whereas a ternary phase-coded waveform has three phases, 0°, 90° or 180°. Similarly, a polyphase code has a larger number of levels [2]. Even though binary sequences are easily generated and processed, even long binary sequences offer only a limited reduction in peak sidelobe level. The ratio of the sidelobe maximum to the peak of the main lobe is called the peak sidelobe ratio (PSLR); it is measured from the autocorrelation pattern obtained at the output of the matched filter and is expressed in decibels. Many applications require a low PSLR, which was pursued with longer length binary codes by Boehmer [3], Rao and Reddy [4] and Linder [5], but the reduction achieved is limited. Hence it is necessary to switch from binary sequences to multilevel sequences such as ternary and quinquenary sequences. In this paper, chaotic sequences are used, whose properties have been studied at different lengths [6–8]. These ternary chaotic sequences are generated from different chaotic map equations. The number of sequences that can be generated is practically unlimited, since they are obtained by varying either the initial conditions or the threshold levels, and they exist for very large lengths [9]. The performance of these ternary sequences is studied using the BSSLMS algorithm, which gives superior performance compared to that obtained with the LMS algorithm. The reduction of range sidelobe levels towards a minimum peak sidelobe has been a subject of continuing research interest.
This paper is organized into the following sections. Section 2 explains how to gen-
erate the chaotic ternary sequence. Section 3 presents the adaptive filtering technique.
Section 4 reports the choice and advantages of the proposed algorithm. The design
implementation of the preferred algorithm is discussed in Sect. 5. The simulation
results are presented in Sect. 6 and compared with previous results.
In this paper, the analysis is carried out with the help of chaotic sequences, as these sequences provide autocorrelation and cross-correlation properties similar to those of random white noise, which is desirable for radar and spread spectrum systems. There are different types of chaotic sequences, depending on the chaotic map equation, as presented in Table 1.
Of the above three equations, the logistic map equation is the simplest; its bifurcation factor μ ranges from 0 to 4 and the initial value of x is selected between 0 and 1. The logistic map exhibits chaotic behavior when μ = 4, and beyond 4 the value of x diverges. Similarly, the improved logistic map exhibits chaotic behavior when the initial value lies between −1 and 1. There is a drastic change in the output waveform for a small change in the initial condition, and the value of xn becomes infinite when x0 > 1.
The procedure to generate sequences is described below:
• Select the bifurcation parameter as 4 to retain the map in a chaotic region.
• All the chaotic maps are sensitive to initial values. So it is required to select the
initial value of xn in the proper range.
• Different raw sequences are generated based on different initial values using var-
ious map equations.
• By proper selection of two threshold levels a and b, the raw sequence is quantized
into three defined levels.
• These threshold levels were randomly selected.
• The generation of the ternary sequence is explained below.
$S(n) = \begin{cases} -1 & \text{if } x_n < a \\ \phantom{-}0 & \text{if } a \le x_n \le b \\ +1 & \text{if } x_n > b \end{cases}$
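A short sketch of the generation procedure above for the logistic map x(n+1) = μ x(n) (1 − x(n)) with μ = 4 is given below; the initial value and the thresholds a and b are illustrative choices, not the ones used in the paper, and the PSLR computation follows the definition given in the introduction.

```python
import numpy as np

def ternary_chaotic_sequence(length, x0=0.37, a=0.35, b=0.65, mu=4.0):
    """Generate a ternary {-1, 0, +1} sequence by iterating the logistic map
    and quantizing each sample with thresholds a and b."""
    seq = np.empty(length, dtype=int)
    x = x0
    for n in range(length):
        x = mu * x * (1.0 - x)                      # logistic map iteration
        seq[n] = -1 if x < a else (0 if x <= b else 1)
    return seq

s = ternary_chaotic_sequence(1000)
# Peak sidelobe ratio (dB) from the aperiodic autocorrelation of the sequence
acf = np.correlate(s, s, mode="full").astype(float)
mainlobe = acf[len(s) - 1]
sidelobes = np.delete(acf, len(s) - 1)
pslr_db = 20 * np.log10(np.max(np.abs(sidelobes)) / mainlobe)
print(round(pslr_db, 2))
```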
3 Adaptive Filtering
where w(n + 1) in Eq. (4) represents the updated coefficient values for the next interval
of time.
4 Related Work
LMS algorithm is very easy to implement [10, 11]. Because of this, it is used in var-
ious fields like signal processing, mobile communication [12, 13], adaptive antenna
arrays, etc., [14]. But the main drawback of this algorithm is its slow convergence
rate [10, 14]. There is a major conflict between step size parameter and convergence.
Small step size reduces the mis-adjustment whereas a large step size causes fast con-
vergence. In the present paper binary or two variable step size, LMS is preferred that
is reported [15]. There are two methods of addressing the convergence issue. One is
time domain method and another one is transformed domain method. Variable step
size LMS algorithm is one of the time domain approaches where step size parameter
is dependent.
In this method, two step sizes are derived from a base value delta and a deviation. During the process of updating the filter weights, when the difference between the desired output and the adaptive filter output increases compared with the previous value of the error, the step size is updated to delta + deviation; when it decreases from the past value of the error, the step size is updated to delta − deviation [15]. This also speeds up convergence. Therefore, the step size is μ = delta ± deviation.
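A minimal sketch of this two-valued step-size rule applied to a simple adaptive FIR filtering problem is given below: the step size switches between delta + deviation and delta − deviation according to whether the magnitude of the error grew or shrank. The filter length and signals are illustrative; delta = 0.09 and deviation = 0.004 follow the values used in the simulations below.

```python
import numpy as np

def bss_lms(x, d, num_taps=8, delta=0.09, deviation=0.004):
    """Binary step size LMS: w(n+1) = w(n) + mu(n) * e(n) * x(n), where mu(n)
    is delta + deviation if |e| increased from the previous iteration,
    and delta - deviation if it decreased."""
    w = np.zeros(num_taps)
    prev_err, mse = 0.0, []
    for n in range(num_taps - 1, len(x)):
        xn = x[n - num_taps + 1:n + 1][::-1]        # x[n], x[n-1], ... (most recent first)
        e = d[n] - w @ xn                           # instantaneous error
        mu = delta + deviation if abs(e) > abs(prev_err) else delta - deviation
        w = w + mu * e * xn                         # weight update
        prev_err = e
        mse.append(e ** 2)
    return w, np.array(mse)

rng = np.random.default_rng(0)
x = rng.choice([-1, 0, 1], size=2000).astype(float)   # ternary input sequence
d = np.convolve(x, [0.6, 0.3, 0.1])[:len(x)]          # unknown system to identify
w, mse = bss_lms(x, d)
print(mse[-50:].mean())                               # residual MSE after convergence
```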
6 Simulation Results
Tables 2, 3 and 4 show the performance of the logistic map, improved logistic map and cubic map sequences without LMS, with LMS, and with the BSSLMS algorithm, in terms of peak sidelobe ratio and autocorrelation sidelobe peak respectively. In this paper, the comparison is carried out with the value of delta chosen as 0.09 and the deviation as 0.004.
The values of PSLR and ASP are improved by using the BSSLMS algorithm compared to LMS. From Figs. 1 and 2 it is clear that after implementing the LMS algorithm the PSLR value becomes more negative, i.e., very low, and a further improvement in PSLR is obtained by using binary step size LMS. The results obtained are compared with the results reported in [9]. It is observed that the PSLR for the logistic map sequence is −23.5336 dB, and with the LMS algorithm it improves to −25.3883 dB; with the implementation of binary step size LMS the PSLR is further improved to −26.3558 dB.
It is observed from Tables 2, 3 and 4 that the performance of the improved logistic map sequence is better than that of the logistic map and cubic map sequences. The main cause of this improvement is the decrease in the autocorrelation sidelobe peak.
Table 2 PSLR values of logistic map sequence with LMS and BSSLMS
Seq length | Peak sidelobe ratio (dB): without LMS, with LMS, with BSSLMS | Autocorrelation sidelobe peak: without LMS, with LMS, with BSSLMS | Maximum MSE with LMS | Maximum MSE with BSSLMS
20 −17.5012 −25.7712 −42.6279 0.1333 0.0515 0.0074 1.0975 0.8867
40 −17.7860 −21.2716 −21.6542 0.129 0.0864 0.0827 0.7760 0.5191
50 −18.8402 −22.2532 −21.1856 0.1143 0.0772 0.0872 0.4778 0.3640
70 −18.5884 −18.7603 −20.7469 0.1176 0.1153 0.0918 0.4394 0.2926
90 −18.4597 −20.6247 −21.4026 0.1194 0.0931 0.0851 0.3383 0.2265
100 −18.5314 −20.8866 −21.6672 0.1184 0.0903 0.0825 0.3250 0.2082
300 −20.5993 −22.0398 −23.3156 0.0933 0.0791 0.0683 0.1077 0.0690
500 −21.8451 −23.2414 −25.0431 0.0809 0.0689 0.0560 0.0631 0.0410
700 −22.7244 −24.4511 −25.9211 0.0731 0.0599 0.0506 0.0459 0.0294
1000 −23.5336 −25.3883 −26.3558 0.0666 0.0538 0.0482 0.0329 0.0208
2000 −26.0917 −27.4281 −28.8808 0.0496 0.0425 0.0360 0.0163 0.0103
3000 −27.3804 −28.2946 −29.8452 0.0428 0.0385 0.0322 0.0111 0.0069
4000 −27.9588 −28.7488 −30.1629 0.0400 0.0365 0.0310 0.0084 0.0052
5000 −28.9338 −29.5468 −30.6066 0.0358 0.0333 0.0295 0.0066 0.0042
Table 3 PSLR values of improved logistic map sequence with LMS and BSSLMS
Seq length | Peak sidelobe ratio (dB): without LMS, with LMS, with BSSLMS | Autocorrelation sidelobe peak: without LMS, with LMS, with BSSLMS | Maximum MSE with LMS | Maximum MSE with BSSLMS
20 −20.8279 −25.6701 −29.8238 0.0909 0.0521 0.0323 1.0892 0.8672
40 −18.5884 −21.4702 −27.3875 0.1176 0.0844 0.0427 0.6652 0.4884
50 −18.4164 −19.9979 −20.5865 0.1200 0.1000 0.0935 0.5677 0.3999
70 −18.5884 −20.0732 −18.5884 0.1176 0.0992 0.1176 0.4322 0.2920
80 −19.0849 −19.5242 −19.0849 0.1111 0.1056 0.1111 0.3515 0.2478
90 −18.6900 −20.5679 −21.5512 0.1163 0.0937 0.0836 0.3349 0.2257
100 −19.0849 −20.0810 −21.0857 0.1111 0.0991 0.0883 0.2972 0.2010
300 −20.9399 −21.7370 −23.5355 0.0897 0.0819 0.0666 0.1040 0.0681
500 −22.1371 −22.9685 −25.2126 0.0782 0.0711 0.0549 0.0674 0.0419
700 −22.8024 −24.1078 −24.5917 0.0724 0.0623 0.0589 0.0469 0.0296
1000 −24.4089 −26.4352 −27.7858 0.0602 0.0477 0.0408 0.0321 0.0206
2000 −26.1176 −27.0794 −28.3376 0.0494 0.0443 0.0383 0.0165 0.0104
3000 −27.2231 −29.0352 −30.4296 0.0435 0.0353 0.0301 0.0112 0.0070
4000 −28.1748 −28.9364 −30.4332 0.0390 0.0357 0.0301 0.0084 0.0052
5000 −29.0712 −29.6382 −31.3600 0.0352 0.0330 0.0270 0.0067 0.0042
One of the interesting properties of the LMS algorithm is the mean square error, which is also examined in this paper and compared for different sequence lengths with the LMS and BSSLMS algorithms. The mean square error is obtained by computing over 50 iterations for each sequence length, and its maximum value is compared and tabulated. Figure 3 shows the plot of the MSE of the logistic sequence using BSSLMS for length 1000. From the table, it is clear that the maximum mean square error for sequence length 1000 after implementing binary step size LMS is 0.0208, which is less than the 0.0329 obtained with the LMS algorithm. It is also observed that as the length of the sequence increases, the maximum value of the MSE decreases.
Table 4 PSLR values of cubic map sequence with LMS and BSSLMS
Seq length | Peak sidelobe ratio (dB): without LMS, with LMS, with BSSLMS | Autocorrelation sidelobe peak: without LMS, with LMS, with BSSLMS | Maximum MSE with LMS | Maximum MSE with BSSLMS
20 −20.8279 −30.9226 −44.7795 0.0909 0.0284 0.0058 1.1653 0.9181
40 −18.0618 −20.0282 −23.5960 0.1250 0.0997 0.0661 0.6993 0.5002
50 −20.0000 −21.7463 −22.3400 0.1000 0.0818 0.0764 0.5000 0.3724
70 −18.2763 −19.3356 −19.6396 0.1220 0.1079 0.1042 0.4195 0.2873
90 −18.7570 −19.7470 −19.7087 0.1154 0.1030 0.1034 0.3260 0.2226
100 −18.7152 −21.2510 −21.8584 0.1159 0.0866 0.0807 0.3026 0.2030
300 −21.3079 −24.2402 −24.1505 0.0860 0.0614 0.0620 0.1044 0.0681
500 −22.1039 −24.3158 −25.4822 0.0785 0.0608 0.0532 0.0635 0.0410
700 −22.7568 −25.0277 −25.5250 0.0728 0.0561 0.0529 0.0456 0.0294
1000 −23.8407 −25.1673 −26.9599 0.0643 0.0552 0.0449 0.0333 0.0208
2000 −25.6739 −26.0207 −27.6451 0.0520 0.0500 0.0415 0.0166 0.0104
3000 −27.0651 −27.3001 −28.7057 0.0443 0.0432 0.0367 0.0112 0.0070
4000 −27.8147 −28.1074 −29.5489 0.0407 0.0393 0.0333 0.0085 0.0052
5000 −28.3786 −28.4439 −30.0104 0.0381 0.0378 0.0316 0.0068 0.0042
7 Conclusion
The performance of optimum ternary chaotic sequences with the help of peak side-
lobe ratio and autocorrelation sidelobe peak is compared for the different length of the
sequences with the proposed algorithm. Better results were obtained with least mean
square algorithm in adaptive filtering. But a further improvement in these perfor-
mances is achieved with binary step size least mean square algorithm. The improved
Fig. 3 Plot of mean square error of the logistic map sequence of length 1000 (mean square error vs. number of iterations, 0–50)
References
Abstract The concept of the Internet of Things (IoT) enables our common devices and gadgets to be interconnected in order to exchange information and act according to environmental stimuli. These smart IoT devices, or motes, consist of sensors and actuators. When these IoT motes are connected to the Internet, the user can control them remotely from any part of the world. Using the benefits of IoT devices, remote patient monitoring can be implemented to monitor pregnant women in rural areas. In particular, the blood pressure of pregnant women can be remotely monitored to diagnose preeclampsia. Smart IoT devices such as BP monitors can send the sensed information over the air so that doctors can receive, store and process it to predict abnormalities in real time. Moreover, since smart IoT devices are battery operated, energy is a crucial factor to be considered; as most medical gadgets depend on battery sources, power conservation by the nodes is a prime focus. Since crucial time-bounded data must also be transmitted, the delay incurred for data transmission, in terms of round trip time (RTT), is the next metric to be measured. In this work, two IoT protocols (CoAP and 6LoWPAN) have been simulated to evaluate power and delay for a defined network environment under study. The performance of the network is evaluated in the context of an IoT-based remote patient monitoring system.
1 Introduction
IoT is a network that connects and interacts with physical objects. This interaction with physical objects requires smart features that encompass various communication protocols designed for the particular application domain [1]. In the current scenario, monitoring patients who suffer from chronic ailments has become a challenging task: the patients' condition keeps varying from time to time and cannot be predicted. Therefore, location-free continuous medical supervision is essential in order to reduce deaths caused by a lack of timely medical assistance. The network formed by smart IoT medical gadgets provides information regarding the physiological condition of the patients on the go. These IoT-based medical devices are usually of two forms, wearable and implantable; they sense, collect, store and report the data to a control center whenever any abnormality arises [2].
These devices form a specialized network called a body-area network (BAN) that can collect information about an individual's health, fitness, and energy status. This network enables the user to connect lightweight, small-sized, ultra-low-powered, wearable smart sensors which continuously monitor the human's physiological conditions and actions. These devices can communicate wirelessly using technologies such as Bluetooth, ZigBee, WiFi, and other compatible communication standards. This requires the support of good routing protocols to transfer the sensed data efficiently and quickly.
Blood pressure that persists at or above 140/90 mmHg indicates hypertension. Sustained hypertension over a period of time is a major risk factor for the heart and its associated ailments. Gestational hypertension, or pregnancy-induced hypertension (PIH), is the development of hypertension in a pregnant woman after 20 weeks of gestation without protein in the urine or other signs of preeclampsia.
This paper focuses on the customization of protocols for enabling better communication of sensed blood pressure information using IoT technology, so that PIH can be monitored and treated effectively in order to save patients from any kind of criticality. The paper is organized as follows: the introduction is given in Sect. 1. Section 2 gives a brief summary of the literature survey. In Sect. 3 the related work is discussed. In Sect. 4, the simulation setup and the implementation are explained. The results analysis is presented in Sect. 5 and the study is concluded in Sect. 6.
2 Literature Survey
Zhang et al. have proposed a system for collecting present and future health informatics in China [3]. The system was designed for assisted living of residents in China, but it is also highly beneficial for others through a continuous remote health monitoring process. The data have been collected to build informatics repositories, which are essential for medical data analysis, following the guidelines of the Chinese Medical Association. However, there is a problem in sharing such sensitive information without any standards, so standardization of these data and their interoperability is another potential area to be addressed.
Orlando et al. have proposed a practical implementation of the application layer protocol CoAP (Constrained Application Protocol) for low-power devices for patient monitoring [4], and it supports integration of the protocol with the Internet. Although the proposed system uses the CoAP protocol, the focus of the paper lies only on simple data transfer functionalities. Yuce has proposed a 6LoWPAN (IPv6 over Low-power Wireless Personal Area Networks) based U-healthcare platform [5], which enables remote monitoring of patient health status in real time and includes provision for feedback and remote action by providers. The system mainly used temperature and ECG sensor nodes. This proposed system allows only online streaming because it uses 3G/4G or WiFi connectivity, so it works where Internet connectivity is good, including in emergency conditions.
Clifton et al. have proposed a wireless healthcare monitoring system that uses a mobile device to track the physiological condition of a patient on the go [6]. Medical assistance can be given to the patients based on this monitored data. Parane et al. have proposed an IoT remote healthcare monitoring system [7, 8], which provides the patient's vital parameters through a web browser. It also describes the steps for connecting 6LoWPAN with the outside world and the Internet, and it mainly focuses on the implementation of the CoAP protocol in Mozilla Firefox. However, the security aspects have to be taken into consideration.
3 Related Works
In this work, the protocols are compared in terms of performance aspects, and the observed results are used to suggest the protocol that is suitable for the given environmental factors. The protocols RPL (Routing Protocol for Low-Power and Lossy Networks), 6LoWPAN and CoAP are implemented using the Cooja simulator. In this simulation tool, the results of the protocols are analyzed for different types of motes, namely skymote, cooja motes, z1mote, and wismote.
4 System Architecture
The sensors are clustered for measuring blood pressure as the sensing parameter. The data are collected through the blood pressure sensors, which have been clustered using various protocols such as RPL, 6LoWPAN and CoAP, and the data are observed by the monitoring system. When the blood pressure exceeds the threshold level (above the normal range) for either the systolic or the diastolic pressure, an alert signal is generated. With the help of the patient's record, this alert signal is forwarded to the physician. Figure 1 illustrates this system architecture.
With CoAP, the device can be remotely controlled; it is an application layer protocol [9] that can be used from anywhere. 6LoWPAN is designed for lossy and low-power networks and is mainly used for small devices; it transfers data at lower rates, and hierarchical routing is used in this process. In order to monitor the BP level of the patient, clustering is performed by these different protocols. The BP sensors (sphygmomanometer-based) are wrapped around the patient's arm, and the observed values are sent to the cluster head, which manages the values of the BP sensor nodes [10]. Every cluster sends its values to the monitoring system. For pregnancy-induced hypertension cases, a threshold blood pressure value is set and an alert signal is generated whenever the observed values rise above the threshold.
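A simple sketch of this alert rule follows: a reading is flagged when either the systolic or the diastolic value crosses its threshold (the 140/90 mmHg figures follow the hypertension criterion given in the introduction); the record structure and patient identifiers are our own illustration.

```python
from dataclasses import dataclass

@dataclass
class BPReading:
    patient_id: str
    systolic: int      # mmHg
    diastolic: int     # mmHg

def check_alert(reading, sys_threshold=140, dia_threshold=90):
    """Return True when either pressure crosses its threshold, so the
    monitoring system can notify the physician with the patient's record."""
    return reading.systolic >= sys_threshold or reading.diastolic >= dia_threshold

readings = [BPReading("P01", 118, 76), BPReading("P02", 152, 94)]
alerts = [r.patient_id for r in readings if check_alert(r)]
print(alerts)   # ['P02']
```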
5 Simulation Setup
For simulating this scenario, the Linux-based Cooja simulator is used. Cooja is an emulator: it uses images of the mote types to emulate the real-time working of different sensor motes. It supports different mote types such as skymote, waspmote, etc. It has provision for real mote connectivity, but only in an experimental state. Skymote is commonly used for most of the simulations.
The following simulation metrics are considered and measured using the formulas:
Packet Reception Rate: $PRR = \dfrac{\text{Number of received packets}}{\text{Number of transmitted packets}}$
Energy Consumption: $EE = \dfrac{\text{Remaining power}}{\text{Initial power}} \times 100$
Expected number of transmissions: $ETX = \dfrac{1}{PRR_{down} \times PRR_{up}}$
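These metrics can be computed directly from raw packet and energy counts, as in the small sketch below; the sample numbers are illustrative and are not the simulation outputs reported later.

```python
def packet_reception_rate(received, transmitted):
    """PRR = received packets / transmitted packets."""
    return received / transmitted

def energy_consumption_pct(remaining_power, initial_power):
    """EE = (remaining power / initial power) * 100, per the formula above."""
    return (remaining_power / initial_power) * 100

def expected_transmissions(prr_down, prr_up):
    """ETX = 1 / (PRR_down * PRR_up)."""
    return 1.0 / (prr_down * prr_up)

prr_down = packet_reception_rate(received=930, transmitted=1000)
prr_up = packet_reception_rate(received=905, transmitted=1000)
print(round(energy_consumption_pct(remaining_power=7.75, initial_power=10.0), 1))  # percent
print(round(expected_transmissions(prr_down, prr_up), 2))                          # ETX
```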
Figure 2a, b illustrate the simulation of the CoAP protocol with 10 and 20 motes in the Cooja simulator. Based on the simulation, the aforesaid metrics, namely packet reception rate (PRR), energy consumption (EE), expected number of transmissions (ETX) and lifetime, are calculated, and the results are discussed in the following section.
6 Result
The round trip time (RTT) of CoAP for 10 and 20 motes is calculated and shown as a bar chart in Fig. 3. For 10 motes, the delay incurred in terms of round trip time is on average 83.75 ms, but it increases to an average of 270 ms in the 20-mote scenario. So, as the scale of the network increases, the delay also increases. A simple doubling would suggest an RTT of around 167 ms, but it is measured to be around 270 ms, about 102 ms in excess. This may be due to the excessive transmission hops and the larger number of nodes to be addressed.
Fig. 3 Comparison of RTT (round trip time) of CoAP for 10 and 20 motes
In the case of the power consumption of CoAP (Fig. 4), for the 10-mote deployment an average of 22.5% of the power is consumed. When the node population is doubled, the average power consumed by CoAP is 73.25%, showing that considerably more power is consumed as the network scales.
On the other hand, the power estimation of 6LoWPAN has been analyzed and is illustrated in Fig. 5. Since the protocol targets low-powered devices, the estimation is categorized as Low-Power Mode (LPM), CPU power utilization, radio listen and radio transmission. In LPM, the average power consumed in the 10-mote scenario is a constant 0.12 mW; since all motes are in the idle state, this is taken as the standard power dissipation of every mote based on the network physical model. The CPU consumes 0.3 mW of power for instruction processing. The radio duty cycle accounts for the major share of the power, 0.5 mW in listening mode and 0.25 mW in transmission mode.
When the mote population is increased to 20 (Fig. 6), the LPM and CPU power are unaffected and remain at the standard 0.12 mW and 0.3 mW respectively. However, the radio listen power increases slightly, by 0.1 mW, to 0.6 mW in the 20-mote scenario, and the radio transmit power is 0.4 mW, considerably higher than the 0.25 mW consumed in the 10-mote scenario. Figure 7 illustrates the overall average power consumed by 6LoWPAN. It can be inferred that as the network scales, the average listen power increases while the average transmit power decreases. This trend may be due to the increased concentration of motes, which reduces the distance between motes and hence the energy required for transmission. Figures 8 and 9 illustrate
the expected number of transmissions and the duty cycle for 10 and 20 motes using 6LoWPAN. The power used for listening has a greater impact on power drain than the transmission duty cycle. This may be due to unwanted listening by motes in the network.
From this simulation of the two protocols, it can be inferred that both have their advantages and disadvantages. If a device needs to be controlled remotely, both the CoAP protocol and 6LoWPAN can be applied, but 6LoWPAN yields better energy-saving features. So, for battery-operated smart devices in dense deployments, 6LoWPAN is preferable.
Real-time continuous monitoring and early detection of hypertension improve the quality of human life. In this study, dataset values are given as input to the protocols. The proposed architecture involves WBANs that help to monitor remotely located patients. The PIH detection method, together with the CoAP and 6LoWPAN protocols, helps to detect the presence of PIH in patients. Hence, this system has been devised to provide continuous monitoring and to give pregnant women timely assistance. These data are used as a reference by physicians for future analysis of the patients.
This work is to be extended to an evaluation of CoAP and 6LoWPAN based on Quality of Experience (QoE). As future work, a model for QoE estimation is to be proposed for evaluating the performance of network routing protocols for WBANs.
References
1. Leng C, Yu H, Wang J, Huang J (2013) Securing personal health records in clouds by enforcing
sticky policies. TELKOMNIKA Indones J Electr Eng 11(4):2200–2208
2. Haiwen H, Zheng W (2013) A privacy data-oriented hierarchical MapReduce programming
model. TELKOMNIKA Indones J Electr Eng 11(8):4587– 4593
3. Zhang Y, Xu YY, Shang L, Rao K (2007) An investigation into health informatics and related
standards in China. Int J Med Inform 76(8):614–620
4. Orlando REP, Caldeira MLP, Lei S, Rodrigues JPC (2014) An efficient and low cost windows
mobile BSN monitoring system based on TinyOS. J Telecommun Syst 54(1):1–9
5. Yuce MR (2010) Implementation of wireless body area networks for healthcare systems. Sens
Actuators A Phys 162(1):116–129
6. Clifton L, Clifton DA, Pimentel MAF, Watkinson PJ, Tarassenko L (2014) Predictive moni-
toring of mobile patients by combining clinical observations with data from wearable sensors.
IEEE J Biomed Health Inform 18(3):722–730
7. Parane KA, Patil NC, Poojara SR, Kamble TS (2014) Cloud based intelligent healthcare moni-
toring system. In: Proceedings of international conference on issues and challenges in intelligent
computing techniques (ICICT), Ghaziabad, Indian, 7–8 Feb, pp 697–701
8. Bortenschlager M (2007) Current developments and future challenges of coordination in pervasive environments. In: 16th IEEE international workshops on enabling technologies: infrastructure for collaborative enterprises, WETICE, 18–20 June 2007, pp 51–55
9. Gowrishankar S, Basavaraju TG, Manjaiah TG, Handy MJ, Haase M, Timmermann D (2002)
Low energy adaptive clustering hierarchy with deterministic cluster-head selection. In: IEEE
international conference on mobile and wireless communications networks, Stockholm
10. Alwan M, Dalal S, Mack D, Turner B, Leachtenauer J, Felder R (2006) Impact of monitoring
technology in assisted living. Outcome pilot. IEEE Trans Inform Technol Biomed
Performance Analysis of LMS
and Fractional LMS Algorithms
for Smart Antenna System
Abstract Adaptive smart antennas are used as spatial filters for receiving intended signals arriving from specific directions while nullifying the reception of undesired signals emanating from other directions. The LMS algorithm has become the most prominent technique used in several applications, including beamforming of antenna arrays, because of its simplicity and robustness. In this paper, a variant of LMS called Fractional LMS (FLMS) is proposed for updating the complex weights of a uniform circular array, and its performance is compared with the standard LMS algorithm. Simulation results show that FLMS converges faster than LMS and gives a good reduction of side lobes compared to the LMS algorithm.
1 Introduction
K. Sridevi (B)
ECE Department, Acharya Nagarjuna University, Guntur, India
e-mail: kadiyamsridevi1980@gmail.com
A. Jhansi Rani
ECE Department, VR Siddhartha Engineering College, Vijayawada, India
e-mail: jhansi9rani@gmail.com
to dynamically nullify the interferences while aiming at the intended user [6, 11]. These adaptive smart antennas are used as spatial filters for receiving the intended signals arriving from specific directions while nullifying the reception of undesired signals emanating from other directions.
Side lobes not only cause interference to other users but also waste power. Therefore, in this work, attention is paid to sidelobe level reduction [1–5]. In an adaptive smart antenna, reduction of side lobes is a further issue: as in an ordinary array, a large reduction of side lobes is not possible, because beamforming algorithms are used for updating the weights to generate the main beam and nulls in specific directions in addition to array synthesis methods.
The LMS algorithm [6, 7] has become the most prominent technique used in several applications, including the beamforming of antenna arrays, because of its simplicity and robustness.
In this paper, a variant of LMS called Fractional LMS (FLMS) is proposed for
updating the weights of the uniform circular array and its performance is compared
with standard LMS algorithm.
The circular geometry has N isotropic elements uniformly distributed around the circle. Circular geometry offers several advantages over other geometries because of its symmetry and because it has no edge elements.
The array factor of this configuration is given by [8]
$AF(\theta) = \sum_{n=1}^{N} I_n \, e^{\,j\left(ka\cos(\theta-\Phi_n)+\alpha_n\right)}$  (1)
$ka = \dfrac{2\pi a}{\lambda} = \dfrac{1}{\lambda}\sum_{i=1}^{N} d_i$  (2)
where $I_n$ and $\alpha_n$ are the amplitude and phase excitation of the nth element, $d_n$ is the distance between adjacent elements, $k = 2\pi/\lambda$ is the wave number, $\theta$ is the plane-wave incidence angle, and $\lambda$ is the signal wavelength. In the x–y plane, the angular position $\Phi_n$ of the nth element is given by
$\Phi_n = \dfrac{2\pi}{ka}\sum_{i=1}^{n} \dfrac{d_i}{\lambda}$  (3)
The phase excitation of the nth element for directing the main beam towards the $\theta_0$ direction is given by $\alpha_n = -ka\cos(\theta_0 - \Phi_n)$.
2 FLMS Algorithm
The adaptive weight mechanism in the LMS algorithm allows only the first derivative,
whereas, in fractional LMS, a fractional derivative [9, 10] is used in addition to the
first derivative. The corresponding LMS and FLMS equations are given in (8) and (9)
respectively. Based on the minimum mean square criterion [11], the uplink weights
are updated.
The cost function is given by [11]
The gradient is zero when the minimum occurs. The solution for the optimum
weights is thus given by
$w_{opt}(k) = R_{yy}^{-1}\, s$  (7)
where $\mu_f$ is the fractional step size and $v$ is the fractional derivative order, which varies between 0 and 1. This yields the new weight update adaptation equation for the FLMS [9]. Steady-state performance improves as the fractional order is decreased, at the cost of a slight decrease in convergence speed.
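Since the update equations (8) and (9) are not reproduced here, the sketch below uses the fractional LMS form commonly attributed to [9], in which a fractional-order correction term scaled by 1/Γ(2 − v) is added to the standard LMS update. It is a simplified real-valued adaptive filtering illustration rather than the full complex-weight beamformer; the magnitude of the weights is used in the fractional factor to keep it real-valued, and the step sizes and signals are illustrative assumptions.

```python
import numpy as np
from math import gamma

def flms(x, d, num_taps=8, mu=0.05, mu_f=0.05, v=0.5):
    """Fractional LMS sketch: standard LMS term plus a fractional-order term
    scaled by 1/Gamma(2 - v); |w|**(1 - v) keeps the factor real-valued."""
    w = np.zeros(num_taps)
    mse = []
    for n in range(num_taps - 1, len(x)):
        xn = x[n - num_taps + 1:n + 1][::-1]          # most recent samples first
        e = d[n] - w @ xn                             # instantaneous error
        frac = (np.abs(w) ** (1.0 - v)) / gamma(2.0 - v)
        w = w + mu * e * xn + mu_f * e * xn * frac    # LMS term + fractional term
        mse.append(e ** 2)
    return w, np.array(mse)

rng = np.random.default_rng(1)
x = rng.standard_normal(3000)
d = np.convolve(x, [0.7, 0.2, -0.1])[:len(x)]         # unknown system to identify
w, mse = flms(x, d)
print(mse[-100:].mean())                              # residual MSE after convergence
```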
3 Simulation Results
Fig. 1 Radiation pattern (array factor vs. AOA in degrees) of the uniform circular array; the desired angle is −30° with two interferers at −40° and 20°
Fig. 2 Radiation pattern (array factor vs. AOA in degrees) of the uniform circular array; the desired angle is 20° with two interferers at 30° and −40°
Fig. 3 Mean square error of the LMS algorithm versus iteration number (500 iterations)
Both algorithms drive the main beam towards the signal of interest and efficiently keep nulls towards the signals of no interest. Compared with beamforming using LMS, FLMS gives a lower sidelobe level, and the convergence speed of FLMS is faster than that of the LMS algorithm, as shown in Figs. 3 and 4.
Fig. 4 Mean square error of the FLMS algorithm versus iteration number (500 iterations)
The weights of LMS and FLMS for N = 8 elements are shown in Table 1.
4 Conclusions
In this paper, a variant of the LMS algorithm, the fractional LMS (FLMS) algorithm, is proposed for a uniform circular array of eight isotropic elements. FLMS is somewhat more complex than LMS as it involves a fractional derivative. Both algorithms drive the main beam towards the signal of interest and efficiently keep nulls towards the signals of no interest. Compared with beamforming using LMS, FLMS gives a lower sidelobe level, and the convergence speed of the FLMS is faster than that of the LMS algorithm.
References
1. Azhary EI, Afifi MS, Excel PS (1998) A simple algorithm for sidelobe cancellation in a partially
adaptive linear array. IEEE Trans Antennas Propag 37(11):1484–1486
2. Yan KK, Lu Y (1997) Sidelobe reduction in array-pattern synthesis using genetic algorithm.
IEEE Trans Antennas Propag 45(7):1117–1122
3. Im H-J, Choi S, Choi B, Kim H, Choi J (2001) A survey of essential problems in the design of
Smart antenna system. Microw Opt Technol Lett 33:31–34
4. Albagory Y, Dessouky M, Sharshar H (2007) An approach for low side lobe beam forming in
uniform concentric circular arrays. Int J Wirel Pers Commun 43(4):1363–1368
5. Recioui AA, Bentarzi H, Dehmas M, Chalal M (2008) Synthesis of linear arrays with side lobe
level reduction constraint using genetic algorithms. Int J Microw Opt Techn 3(5):524–530
6. Godara LC (1997) Applications of antenna arrays to mobile communications, part II: beamforming and direction-of-arrival considerations. Proc IEEE 85(8):1195–1245
7. Widrow B, Stearns SD (1985) Adaptive signal processing. Prentice-Hall Inc., New Jersey
8. Ioannides P, Balanis CA (2005) Uniform circular and rectangular arrays for adaptive beam
forming applications. Antennas Wirel Propag Lett. IEEE 4:351–354
9. Asif Zahoor RM, Qureshi IM (2009) A modified least mean square algorithm using fractional
derivative & its application to system identification. Eur J Sci Res 35(1):14–21
10. Miller KS, Ross B (1993) An introduction to the fractional calculus & fractional differential
equations. Wiley New York
11. Gross F (2005) Smart antennas for wireless communications. McGraw-Hill
Accuracy Assessment of Classification
on Landsat-8 Data for Land Cover
and Land Use of an Urban Area
by Applying Different Image Fusion
Techniques and Varying Training
Samples
1 Introduction
Images can be obtained from different types of sensors for the same region of
interest. The quality of the image depends upon the type of sensor and its reso-
lution. Image fusion is a method where multispectral bands can be combined with
panchromatic (PAN) band to get the composite image of higher spatial resolution.
This results in enhancements of the image for visual interpretation and quantitative
assessment. Fusion can be applied to images of earth observation satellites providing
high-resolution panchromatic and low-resolution multispectral data [1].
Fusion of satellite images can be carried out using methods that work at three different levels: the per-pixel level, the level of features extracted from the image, and the decision level [2]. Reference [3] used Sentinel-2 and Landsat-8 datasets for fusion: Landsat-8 bands 2–7 with a spatial resolution of 30 m and Sentinel-2 bands 2–7 with spatial resolutions of 10 and 20 m, resampled to 30 m, were fused. The authors used MLC and SVM supervised classification methods, wherein SVM gave better accuracy than MLC, and they also found that the Sentinel-2 datasets gave better classification accuracy [4]. Reference [3] further observed good geometric correspondence between the different bands.
Reference [5] used a LISS-IV dataset and applied the Brovey Transform, Principal Component Analysis, Multiplicative Technique (MT), Intensity Hue Saturation and High Pass Filtering (HPF) methods for image fusion. Reference [6] used a dataset of Landsat-5 TM bands with a spatial resolution of 30 m and WorldView-2 (WV-2) data with a spatial resolution of 2 m; the authors observed that fusing bands at 2 m spatial resolution can attain higher producer's, user's, and overall accuracies as compared to the classification of medium-resolution Landsat and WV-2 data. Reference [7] also attempted fusion of Landsat-8 imagery and found that it enhances classification accuracies. Reference [8] found similar results on merged data of LISS-III and LISS-IV.
From all the papers studied in relation to this work, it has been observed that fusion has generally been carried out using images from commercially available satellites of very high or high spatial resolution. In this paper, fusion is attempted using images from the moderate/coarse spatial resolution satellite Landsat-8 (L1T product), which are available free of cost. Image fusion of the multispectral (MS) bands having a spatial resolution of 30 m with the PAN band of the same sensor (spatial resolution 15 m) has been carried out, and the results of fusion against reference images are recorded. Various techniques are used to perform the fusion. The accuracy of the fusion methods is analyzed, and classification is then performed using supervised and unsupervised classification methods. The impact of the number of training samples on the classification is also analyzed. Many studies have been carried out on land use and land cover classification using fusion methods on images from the previous series of Landsat satellites, and Landsat-8 data has been found to be appropriate for land use and land cover classification.
Fig. 1 The study area is in the center of Maharashtra, India. Left-hand-side is a map of the Maha-
rashtra state showing study area and the right-hand side is image subset obtained from Landsat-8
The study site shown in Fig. 1 is an urban area in Aurangabad city of Maharashtra, India, that contains manmade and natural features: roads, vegetation, manmade structures, water bodies and bare land. The study area lies between 19° 10′ 1.6″ and 21° 16′ 29.75″ North latitude and 74° 43′ 44.83″ to 76° 53′ 42.79″ East longitude.
The image was acquired by the Landsat-8 satellite, launched on 11 February 2013, which has 11 bands with spatial resolutions ranging from 30 to 100 m for the multispectral and thermal bands and 15 m for the PAN band. A Level 1T (terrain corrected) scene of the OLI/TIRS sensor, PATH/ROW 146/46, acquired on March 21, 2017, was downloaded from the USGS (United States Geological Survey) Earth Explorer (http://earthexplorer.usgs.gov/) [9, 10].
The L1T images are preprocessed, the various fusion methods are applied, and these images are used for classification. L1T images without calibration were also classified to check the impact of radiometric correction on classification; it was observed that classification accuracy increases to a large extent on calibrated images. The L1T image is delivered in units of Digital Numbers (DNs), which can be rescaled to Top-Of-Atmosphere (TOA) spectral reflectance. The accuracy of the corrected image was verified against the Landsat surface reflectance higher-level image obtained from the USGS.
According to [2], image fusion methods can be broadly placed into two classes: color-related techniques and statistical or numerical methods. Color-related techniques consist of the color composition of three image channels in the RGB color space as well as other color transformations. Statistical approaches are developed on the basis of channel statistics, including correlation and filters; techniques like PCA and regression belong to this group. The methods used in this research work are: Intensity Hue and Saturation (IHS) [11], Gram-Schmidt (GS) [12], PC Spectral Sharpening (PC), Brovey Fusion [13], CN Spectral Sharpening [14], and the proposed method, a layer stack of the R, G and NIR bands with PAN, resampled using Nearest Neighbor (NN) and Cubic Convolution (CC). In the layer stacking method, the RGB composite of bands 5, 4, and 3 is resampled to the 15 m spatial resolution and stacked with the PAN band, producing the LayStkCC and LayStkNN images. The NN method considers the closest pixel to the interpolated point, whereas the CC method uses the surrounding sixteen pixels: cubic polynomials are fitted along the four lines of four pixels surrounding the point in the image. Bands 5, 4, and 3 of the multispectral image are selected as they have the maximum reflectance values of the land cover features present in the scene. All the fusion methods were applied to a spatial subset of 141 × 135 pixels.
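A numpy-only sketch of the proposed layer-stack fusion is given below: 30 m bands 5, 4 and 3 are upsampled by a factor of two with nearest-neighbour replication and stacked with the 15 m PAN band. The random arrays stand in for the actual Landsat-8 subset, and the function names are ours.

```python
import numpy as np

def nearest_neighbor_upsample(band, factor=2):
    """Nearest-neighbour resampling: each 30 m pixel is replicated into a
    factor x factor block on the 15 m grid."""
    return np.kron(band, np.ones((factor, factor), dtype=band.dtype))

def layer_stack_fusion(ms_bands, pan):
    """Stack the upsampled multispectral bands with the PAN band (LayStkNN)."""
    upsampled = [nearest_neighbor_upsample(b) for b in ms_bands]
    assert all(u.shape == pan.shape for u in upsampled)
    return np.stack(upsampled + [pan], axis=0)   # shape: (bands + 1, rows, cols)

rng = np.random.default_rng(0)
b5, b4, b3 = (rng.random((141, 135)).astype(np.float32) for _ in range(3))  # 30 m reflectance stand-ins
pan = rng.random((282, 270)).astype(np.float32)                             # 15 m PAN stand-in
fused = layer_stack_fusion([b5, b4, b3], pan)
print(fused.shape)   # (4, 282, 270)
```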
Supervised classification methods are based on prior knowledge of the area to be classified. The methods used are Maximum Likelihood Classifier (MLC), Parallelepiped, Mahalanobis distance, Minimum distance, Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and Neural Net (NN). Classification was also performed using the unsupervised clustering methods IsoData and K-Means to classify the reference and fused images [11, 15].
Five land cover classes, namely water, road, vegetation, building and bare land, were found to be present in the region of interest. Training samples were created for each class using the reference data obtained from ground truth data. Two different sets of training samples were taken for classification: the first set contained 290 points and the second set contained 543 points. A separate set of 205 points was created for testing the classification accuracies.
Spectral reflectance values of the five land cover classes were recorded in seven bands, and the reflectance curves of the L1T product were compared with those of the calibrated L1T product. As shown in Fig. 2, the different land covers are prominently differentiable from each other after calibration. This observation was supported by the classification accuracy results: MLC was applied to the L1T image before and after calibration, and the overall accuracy increased from 46 to 72%, with kappa coefficients of 0.32 and 0.65 respectively.
The relative mean bias (% mean bias) is defined as the mean bias divided by the reference mean (µref). The SDD (Standard Deviation Difference) is defined as the difference between the standard deviation of the reference image and that of the fused image.
Fig. 2 Spectral response curve of the L1T image before and after calibration/correction
References [16] and [17] also suggested comparing the histogram of a fused image with that of the reference image to assess its quality. The histograms of all fused images were compared with the reference image and found to be similar, except for the IHS fused image. It is observed that the mean and SD values of all fused images in band 4 are closest to the mean and SD values of the reference image, showing similarity in spectral information. The mean bias and SDD values show that the spectral quality of the image fused using the CN method is closest to the reference image, i.e., the distortion is least. The LayStkCC and LayStkNN images show less distortion in bands 4 and 3 but more in band 2, as indicated by the mean bias values. According to [16], SDD indicates the quantity of information lost or added (noise) during the fusion process: if noise is added, SDD is negative, and if information is lost, the value is positive. RMSE is also recorded in Table 1 along with the mean bias and SDD values.
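A minimal sketch of these spectral-quality metrics, following the definitions above (mean bias, relative mean bias, SDD as the difference of standard deviations, and RMSE), is given below. The two arrays stand in for a reference band and a fused band, and the sign conventions are assumptions consistent with the description above rather than values taken from Table 1.

```python
import numpy as np

def spectral_quality_metrics(reference, fused):
    """Mean bias, relative mean bias (%), SDD, and RMSE between a reference
    band and the corresponding fused band (pixel-wise, per band)."""
    reference = np.asarray(reference, dtype=float)
    fused = np.asarray(fused, dtype=float)
    mean_bias = reference.mean() - fused.mean()
    rel_mean_bias = 100.0 * mean_bias / reference.mean()
    sdd = reference.std() - fused.std()     # > 0: information lost, < 0: noise added
    rmse = np.sqrt(np.mean((reference - fused) ** 2))
    return mean_bias, rel_mean_bias, sdd, rmse

# Toy reference/fused bands in place of the 141 x 135 Landsat-8 subsets.
rng = np.random.default_rng(1)
ref_band = rng.normal(0.25, 0.05, (141, 135))
fused_band = ref_band + rng.normal(0.0, 0.01, (141, 135))
print(spectral_quality_metrics(ref_band, fused_band))
```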
Classification accuracy was evaluated using the confusion matrix, and it was observed that the increase in training points increased the overall accuracy by only 2%. A minimum of 35 training points per land cover class was used to prepare the training set. It was observed that the overall accuracy of the unsupervised methods is less than 35%. The classification accuracy of the L1T image with DN values is the lowest, with an overall accuracy of 46%, while the highest overall accuracy is obtained by the MLC method for the proposed LayStkNN fused image, followed by the PC-fused image and then the CN-fused image. MLC also has the highest kappa coefficient for LayStkNN, followed by PC and then the CN-fused image. The confusion matrix shows that the fused images have higher classification accuracies. Producer's Accuracy (PA) and User's Accuracy (UA) values of the water body class remain the same after fusion. The PA value of the vegetation class increases from 81.25 to 90.55% and that of the road class from 64.29 to 84.34% for the LayStkNN image as compared to the reference image. This is recorded in Table 2, which shows the Producer's and User's Accuracy of the DN L1T image, reference corrected image, and fused images. The overall accuracy of classification on the fused images is given in Fig. 4.
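Producer's and user's accuracy per class, as tabulated in Table 2, follow directly from the confusion matrix diagonal and its row and column sums; a short sketch with an illustrative matrix (not the actual matrix from this study) is shown below.

```python
import numpy as np

def producers_users_accuracy(cm, labels):
    """Per-class producer's accuracy (diagonal / reference totals) and user's
    accuracy (diagonal / classified totals); rows = reference, columns = classified."""
    cm = np.asarray(cm, dtype=float)
    pa = np.diag(cm) / cm.sum(axis=1)   # complement of omission error
    ua = np.diag(cm) / cm.sum(axis=0)   # complement of commission error
    return {lab: (100 * p, 100 * u) for lab, p, u in zip(labels, pa, ua)}

# Illustrative matrix for the five classes used in this study.
labels = ["WB", "BLD", "VEGT", "RD", "BL"]
cm = [[40, 0, 0, 0, 0],
      [0, 30, 2, 8, 4],
      [0, 1, 46, 3, 1],
      [0, 6, 2, 35, 2],
      [0, 5, 1, 3, 16]]
for lab, (pa, ua) in producers_users_accuracy(cm, labels).items():
    print(f"{lab}: PA = {pa:.1f}%, UA = {ua:.1f}%")
```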
In this study, seven image fusion methods were used to fuse Landsat-8 images, two of which create fused images using the concept of layer stacking. The corrected Landsat-8 images of three bands are resampled to the 15 m spatial resolution and then stacked with the PAN band. Two resampling methods are used, namely Nearest Neighbor (NN) and Cubic Convolution (CC); it was observed that NN resampling produced a better quality fused image. Seven classification methods are applied to the reference image and the fused images to classify the area into the desired land cover features. Two sets of training samples with 290 and 543 training points were used to classify the images, and a different set of 205 points was created for testing the classification accuracies. It is observed that increasing the training points increases the classification accuracy by only 2%. MLC and Neural Net gave good classification accuracy.
Table 2 Producer’s and user’s accuracy of DN L1T image, reference corrected image (L8 MS) and fused images
DN L1T img Ref img LayStkNN PC img LayStkCC CN img Brovey img GS img IHS img
(MS) img img
PA% UA% PA% UA% PA% UA% PA% UA% PA% UA% PA% UA% PA% UA% PA% UA% PA% UA%
WB 100 59.4 100 95 100 95 100 100 100 100 100 100 100 97.4 100 74.5 0 0
BLD 47.73 40.4 68.18 65.22 56.13 68.31 59.7 68 54.39 63.3 56.1 69.6 63.7 58.3 63.2 57.5 60.2 54.8
VEGT 43.75 60.9 81.25 78.79 90.55 80.7 89.8 87 90.16 76.1 85.4 82.3 80.3 73.4 60.6 71.4 48.4 59.5
RD 35.71 30.6 64.29 55.1 84.34 70.71 76.3 60 72.62 65.6 77.4 60.8 55.4 50.3 60.1 49.3 42.3 52.6
BL 33.33 46.2 85.33 87.5 57.14 78.43 58.4 75 53.57 75.8 60.5 71.6 44.3 76.5 48.6 69.4 33.6 38.5
Note WB Water Body, BLD Buildings, VEGT Vegetation, RD Road, BL Bare Land, PA Producer’s Accuracy, UA User’s Accuracy
This study can be extended to the fusion of images from different sensors. Also, classification can be further performed on the fused images at the sub-class level, and the quality of the fusion methods can be assessed for sub-class classification.
References
1. Richards JA, Jia X (2006) Remote sensing digital image analysis—an introduction. Springer
2. Pohl C, Van Genderen JL (1998) Multisensor image fusion in remote sensing: concepts, meth-
ods, and applications. Int J Remote Sens 19:823–854
3. Topaloglua RH, Sertel E, Musaoglu N (2016) Assessment of classification accuracies of
Sentinel-2 and Landsat-8 data for land cover/use mapping. Int Arch Photogram Remote Sens
Spat Inf Sci XLI-B8
4. Yan L, Roy DP, Zhang H, Li J, Huang H (2016) An automated approach for sub-pixel registration
of Landsat-8 Operational Land Imager (OLI) and Sentinel-2 multi spectral instrument (MSI)
imagery, MDPI. J Remote Sens. http://www.mdpi.com/journal/remotesensing
5. Srikrishna Shastri C, Ashok Kumar T, Koliwad SP (2016) Advances in classification techniques
for semi urban land features using high resolution satellite data. Int J Adv Remote Sens GIS
5(3):1639–1648
6. Kumar U, Milesi C, Nemani RR, Basu S (2015) Multi sensor multi resolution image fusion for
improved vegetation & urban area classification. Int Arch Photogram, Remote Sens Spat Inf
Sci XL-7/W4
7. Lazaridou MA, Karagianni AC (2016) Landsat 8 multispectral and pansharpened imagery pro-
cessing on the study of civil engineering issues. In: International archives of the photogram-
metry, remote sensing and spatial information sciences, XXIII ISPRS congress, vol XLI-B8
8. Hebbara R, Sesha Sai MVR (2014) Comparison of LISS-IV MX & LISS-III + LISS-IV merged
data for classification of crops. ISPRS Ann Photogram Remote Sens Spat Inf Sci II-8
9. http://earthexplorer.usgs.gov. Accessed 25 Apr 2017
10. http://usgs.gov/Landsat8DataUserhandbook.pdf. Accessed 30 May 2017
11. Mather PM (2004) Computer processing of remotely-sensed images an introduction. Wiley,
pp 149–169 and 203–245
12. Laben CA, Brower BV (2000) Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening. US Patent 6,011,875
13. Jensen JR (2005) Introductory digital image processing: a remote sensing perspective. Prentice
Hall, Upper Saddle River, NY
14. Vrabel J, Doraiswamy P, McMurtrey J, Stern A (2002) Demonstration of the accuracy of
improved resolution hyperspectral imagery. SPIE Symp Proc 4725:556–567
15. Schowengerdt RA (2006) Remote sensing: models and methods for image processing. Academic Press, Elsevier
16. Wald L, Ranchin T, Mangolini M (1997) Fusion of satellite images of different spatial reso-
lutions: assessing the quality of resulting images. Photogram Eng Remote Sens, ASPRS, Am
Soc Photogram Remote Sens 63(6):691–699
17. Thomas C, Wald L (2006) Comparing distances for quality assessment of fused images. In:
26th EARSeL symposium, Varsovie, Poland. Millpress, pp 101–111
Performance Analysis of a PV System
Using HGB Converter
1 Introduction
Demand for renewable energy sources has been rising in recent years due to the shortage of fossil fuels and the effect of greenhouse gases. Nowadays, pollution-free solar cells are preferred as the alternative source of energy in many applications owing to the low maintenance of solar cells and the advances in power electronic components. The terminal voltage of a Photovoltaic (PV) array is of low magnitude and has to be boosted for utility applications. Power electronic converters act as an interface between the solar array and the load/utility grid. As most practical loads and the utility grid are rated at high voltage, the PV array voltage has to be boosted to a suitable level [1–3].
Conventionally, a dc–dc boost converter is used to step up the PV array voltage to a certain level, but the voltage gain of the converter is limited by the leakage resistance of the inductor present in the converter [4–7]. To improve the voltage gain, several topologies have been proposed in the literature using transformers and coupled inductors [8–10]. Alternatively, to achieve high voltage gain without using coupled inductors and transformers, Switched Capacitor (SC) multipliers have been employed in the converter [11–15]. Although the PV system with SC multipliers produces high voltage gain, it has a high input current ripple, which can be minimized by increasing the size of the inductors. As the size increases, the converter becomes bulky and costly, in addition to slowing down the transient response of the system. The use of the interleaving technique in power electronic converters is a better solution to minimize the input current ripple [16–20].
Thus, the proposed system uses a High Gain Boost (HGB) converter which is
constructed to cancel the ripple in the input current and to increase the voltage gain
using two interleaved inductors and SC voltage multiplier cells. Analysis of the
proposed system is discussed in this paper by comparing the simulated results with
the experimental readings.
The Photovoltaic (PV) array is integrated with a dc load through a high gain boost converter, and the block diagram of the system is shown in Fig. 1. The switching pulses of the HGB converter are generated using a gate pulse generator.
Due to the interleaving technique used in the converter, the current ripple at the input is minimized, and thus the ripple in the output voltage is also minimal, which makes the converter more efficient. The power circuit of the HGB converter need not be altered to increase the voltage gain: additional diode-capacitor circuits can be employed in the existing power circuit to increase the voltage gain, which is an added advantage of the proposed system.
3.1 Mode 1
In this mode, switch, Sa is turned on and switch, Sb is turned off. Figure 3a shows the
equivalent circuit of the HGB converter in this mode. During this mode, the energy
from the PV array gets stored in the inductor, La and the diode D3 gets reverse biased.
The inductor, Lb discharges the stored energy by forcing the diode D2 to be forward
biased since the switch, Sb is turned off. The inductor, Lb charges the capacitor, Cc
and the capacitors, Ca and Cb discharge the stored energy to the load.
Fig. 4 Waveforms of input current, IPV , inductor currents ILa and ILb and the gate pulses in two
modes of operation
3.2 Mode 2
In this mode, Sb is turned on and Sa is turned off and the converter’s equivalent
circuit is shown in Fig. 3b. During this mode, the inductor, La discharges the stored
energy through D3 and charges the capacitor Ca . When the switch, Sb is conducting,
inductor, Lb stores the energy and the capacitors Cb and Cc will be in parallel, leading
to a Switched Capacitor circuit. Thus, the capacitor, Cc clamps the voltage across
Cb .
While the switches act complementarily, one inductor is charged and the other is discharged, making the input current ripple zero since the sum of the currents through La and Lb is equal to the input current of the HGB converter, as shown in Fig. 4.
As the sizing of the inductors and capacitors of the HGB converter is a key feature of the proposed system, the design of the HGB converter is explained in detail in the next section.
The HGB converter is designed considering the turn-on period of the converter as DTs, when switch Sb is on and switch Sa is off. During the turn-on period DTs (mode 2), the voltages across the inductors La and Lb are (VPV − VCa) and VPV, respectively. During the turn-off period (1 − D)Ts (mode 1), the voltages across the inductors La and Lb are VPV and (VPV − VCc), respectively.
According to the volt-second balance equation, the average voltages of the inductors are

L_a \frac{di_{La}}{dt} = D(V_{PV} - V_{Ca}) + (1 - D)V_{PV}   (1)

L_b \frac{di_{Lb}}{dt} = D V_{PV} + (1 - D)(V_{PV} - V_{Cc})   (2)

where VPV is the PV array voltage, VCa and VCc are the voltages across the capacitors Ca and Cc, respectively, and D is the converter duty cycle.
In steady state, the inductors' average voltage must be equal to zero [21]. Hence, Eqs. (1) and (2) are equated to zero to obtain the voltage across Ca and Cc, which is expressed as

V_{Ca} = \frac{1}{D} V_{PV}   (3)

V_{Cc} = \frac{1}{1 - D} V_{PV}   (4)

From the above equations, it is found that the voltages across Ca and Cc are proportional to each other, i.e.,

V_{Ca} = \frac{1 - D}{D} V_{Cc}   (5)

V_{Cc} = \frac{D}{1 - D} V_{Ca}   (6)
In steady state, the voltages across the capacitors Cb and Cc in the switched capacitor circuit are the same. Therefore,

V_{Cb} = V_{Cc}   (7)

Under the steady-state condition, the net capacitor current is zero [21]. Hence, Eq. (8) can be rewritten as

i_{La} = \frac{1}{D} \cdot \frac{V_{Ca} + V_{Cb}}{R}   (9)
From the circuit diagram of the HGB converter, the output voltage V0 is given as

V_0 = V_{Ca} + V_{Cb}   (11)

The voltage gain of the converter, computed by combining Eqs. (5)–(7) and (11), is given as follows:

\frac{V_0}{V_{PV}} = \frac{1}{D(1 - D)}   (12)

From Eq. (12), it is evident that the voltage gain is increased due to the additional parameter D in the denominator of the conventional boost converter's voltage gain equation.
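To see how Eq. (12) compares with a conventional boost converter, the short sketch below evaluates both gain expressions over a range of duty cycles; the duty-cycle values are purely illustrative. At D = 0.5 the HGB gain is 4, which is consistent with the simulation result reported later in the paper.

```python
import numpy as np

duty = np.linspace(0.2, 0.8, 7)
gain_hgb = 1.0 / (duty * (1.0 - duty))   # Eq. (12): V0/VPV = 1 / (D(1 - D))
gain_boost = 1.0 / (1.0 - duty)          # conventional boost converter: 1 / (1 - D)

for d, gh, gb in zip(duty, gain_hgb, gain_boost):
    print(f"D = {d:.1f}: HGB gain = {gh:.2f}, conventional boost gain = {gb:.2f}")
```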
The inductor current ripple equations of the HGB converter are given by

\Delta i_{La} = \frac{V_{PV}}{L_a F_s}(1 - D)   (13)

\Delta i_{Lb} = \frac{V_{PV}}{L_b F_s} D   (14)

For achieving zero input current ripple, Eq. (15) is equated to zero and the relationship between the inductors La and Lb is obtained as given below:
L_b = \frac{D}{1 - D} L_a   (16)

For the preselected duty cycle, the values of the inductors can be designed using Eq. (16); for a 50% duty cycle, the value of inductance La is equal to that of Lb.
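A minimal sketch of this inductor design step using Eqs. (13), (14), and (16) is given below. The 25 kHz switching frequency and the roughly 29 V PV voltage match the values reported in the experimental section later in the paper, while the duty cycle and the chosen La are assumptions for illustration.

```python
V_PV = 29.0    # PV array voltage (V), close to the experimental value
F_s = 25e3     # switching frequency (Hz), as used in the prototype
D = 0.5        # assumed duty cycle
L_a = 220e-6   # chosen inductance for La (H), as in the prototype

# Eq. (16): Lb required for input-current ripple cancellation.
L_b = (D / (1.0 - D)) * L_a

# Eqs. (13)-(14): individual inductor current ripples.
ripple_La = V_PV * (1.0 - D) / (L_a * F_s)
ripple_Lb = V_PV * D / (L_b * F_s)

print(f"Lb = {L_b * 1e6:.0f} uH, dI_La = {ripple_La:.2f} A, dI_Lb = {ripple_Lb:.2f} A")
print(f"Net input ripple (dI_La - dI_Lb) = {ripple_La - ripple_Lb:.2e} A")
```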
When Sa is on, the current through the capacitor Ca follows the load current I0 of the HGB converter, and thus the ripple voltage of capacitor Ca is given by

\Delta V_{Ca} = \frac{I_0}{C_a}(1 - D)T_s   (17)

Also when Sa is on, the inductor Lb discharges the stored energy to charge the capacitor Cc. The ripple voltage equation of capacitor Cc is given by

\Delta V_{Cc} = \frac{I_{Lb}}{C_c}(1 - D)T_s   (18)

Similarly, the current through the capacitor Cb follows the load current when switch Sb is on, and the ripple voltage of capacitor Cb is

\Delta V_{Cb} = \frac{I_0}{C_b} T_s   (19)
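Similarly, Eqs. (17)–(19) can be checked numerically for the component values of the prototype; in the sketch below, the load current and capacitances are taken from the experimental section, while the duty cycle and the average Lb current are assumptions for illustration.

```python
I_0 = 0.698        # load current (A), from the experimental results
I_Lb = 1.4         # assumed average current through Lb (A), roughly half the PV current
D = 0.5            # assumed duty cycle
T_s = 1.0 / 25e3   # switching period (s) at 25 kHz
C_a = C_b = C_c = 200e-6   # capacitances used in the prototype (F)

dV_Ca = (I_0 / C_a) * (1.0 - D) * T_s    # Eq. (17)
dV_Cc = (I_Lb / C_c) * (1.0 - D) * T_s   # Eq. (18)
dV_Cb = (I_0 / C_b) * T_s                # Eq. (19)

print(f"dV_Ca = {dV_Ca*1e3:.1f} mV, dV_Cc = {dV_Cc*1e3:.1f} mV, dV_Cb = {dV_Cb*1e3:.1f} mV")
```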
Fig. 6 Waveforms of a PV array voltage and current and b gate pulses to the switches Sa and Sb
The inductors La and Lb charge and discharge complementarily and cancel the ripple in the input PV array current IPV, which is clearly depicted in Fig. 7a. Figure 7b shows the waveforms of the boosted output voltage V0 and output current I0. The simulation results confirm that the HGB converter boosts the PV array voltage with a voltage gain of 4. The voltage gain can be increased by increasing the duty cycle, and for a further increase in the voltage gain, extra diode-capacitor circuits can be added to the existing power circuit.
Analysis of variations in input ripple current is carried out by simulating the
proposed system using different values of duty cycle and inductors and the associated
Fig. 7 Waveforms of a PV array current, IPV and inductor currents, ILa and ILb and b output current
and output voltage
graph is shown in Fig. 8. It is inferred from the graph that, with a constant duty cycle, the input PV array current ripple decreases as the inductance increases, and with constant inductor values, the input PV array current ripple decreases as the duty cycle decreases. Hence, it can be stated that the input PV array current ripple is directly proportional to the duty cycle and inversely proportional to the size of the inductors used in the converter. From the characteristics shown in Fig. 8, the HGB converter can be designed to keep the ripple in the input PV array current within the required limits by appropriately choosing the duty cycle and inductor values.
Figure 9 shows the comparison of the analytical and simulated results for different inductor values and duty cycles of the converter. From this graph, it is clear that the simulated and analytical results are in close agreement, which validates the effectiveness of the proposed system.
Fig. 9 Comparison of analytical and simulated input PV array current ripple for different duty cycles
6 Experimental Investigation
The hardware prototype of the converter was developed in the laboratory and experi-
mentation of the proposed system is carried out. The prototype of experimental setup
and PV module used for the investigation are shown in Fig. 10.
The HGB converter is fabricated using two MOSFETs (IRF540N), three diodes (MUR360), two inductors of 220 µH, and three capacitors of 200 µF. Using a PIC18F45K20 microcontroller, gate pulses with a 25 kHz switching frequency are generated and fed to the HGB converter through a TLP250 MOSFET driver. The generated gate pulses Vga and Vgb with 50% duty cycle are shown in Fig. 11.
At an irradiation of 350 W/m2, the experimentation is carried out with a resistive load of 160 Ω and the results are shown in this section. In Fig. 12, the waveforms of the PV array voltage and current, of 29 V and 2.79 A respectively, are shown. From the
Fig. 11 Gate pulses Vga and Vgb generated using PIC18F45K20 microcontroller
results, it is found that PV array current ripple is within the limits. Figure 13 shows the
inductor currents which are responsible for input PV array current ripple cancelation.
Also, output voltage and current waveforms of 109 V and 0.698 A respectively are
shown in Fig. 14.
Analysis of variation in the ripple of input current for a range of duty cycle for a
fixed inductance of 220 µH is carried out as shown in Fig. 15. It was found that the
analytical, simulated, and experimental results are in close agreement.
Fig. 15 Comparison of input PV array current ripple over a range of duty cycle
7 Conclusion
A PV system using an HGB converter is proposed in this paper. The proposed system uses the interleaving technique to cancel the ripple in the input current at the preselected duty cycle, and by using the switched capacitor circuit, the converter provides a high voltage gain. The proposed system can be used for water pumping applications and for driving dc loads in homes. The proposed system is simulated using PSIM software, and the experimental prototype is developed and investigated in the laboratory. The performance of the system is studied to observe the effect of changes in input current ripple for different duty cycles and inductor values. The working of the system is validated with the analytical, simulated, and experimental results, which are in close agreement. This paper can help designers choose appropriate duty cycle and inductor values to keep the input PV array current ripple within the required limits.
Acknowledgements The authors would like to thank Mr. Mallem Sai Ram, M.Tech. Scholar,
School of Electrical Engineering, VIT University, Chennai, India, for his contributions in the exper-
imentation of the proposed system.
References
1. Zhu L (2006) A novel soft-commutating isolated boost full-bridge ZVS-PWM DC–DC con-
verter for bidirectional high power applications. IEEE Trans Power Electron 21(2):422–429
2. Tseng KC, Liang TJ (2004) Novel high-efficiency step-up converter. Proc Inst Elect Eng Elect
Power Appl 151(2):182–190
3. Krithiga S, Ammasai Gounden NG (2014) Power electronic configuration for the opera-
tion of PV system in combined grid-connected and stand-alone modes. IET Power Electron
7(3):640–647
4. Rosas-Caro JC, Ramirez JM, Peng FZ, Valderrabano A (2010) A DC–DC multilevel boost
converter. IET Power Electron 3(1):129–137
5. Dongyan ZZ, Pietkiewicz A, Cuk S (1999) A three-switch high-voltage converter. IEEE Trans
Power Electron 14(1):177–183
6. Maksimovic D, Cuk S (1991) Switching converters with wide DC conversion range. IEEE
Trans Power Electron 6(1):151–157
7. Middlebrook RD (1988) Transformerless DC-to-DC converters with large conversion ratios. IEEE Trans Power Electron 3(4):484–488
8. Barreto LHSC, Praça PP, Oliveira DS, Bascope RPT (2011) Single-stage topologies integrating
battery charging, high voltage step-up and photovoltaic energy extraction capabilities. Electron
Lett 47(1):49–50
9. Hsieh YP, Chen JF, Liang TJ, Yang LS (2013) Novel high step-up DC–DC converter for
distributed generation system. IEEE Trans Ind Electron 60(4):1473–1482
10. Leu CS, Huang PY, Li MH (2011) A novel dual-inductor boost converter with ripple cancel-
lation for high-voltage-gain applications. IEEE Trans Ind Electron 58(4):1268–1273
11. Berkovich Y, Axelrod B (2011) Switched-coupled inductor cell for DC–DC converters with
very large conversion ratio. IET Power Electron 4(3):309–315
12. Ye Y, Cheng KWE, Chen S (2017) A high step-up PWM DC-DC converter with coupled-
inductor and resonant switched-capacitor. IEEE Trans Power Electron 32(10):7739–7749
13. Krithiga S, Ammasai Gounden N (2014) Investigations of an improved PV system topology
using multilevel boost converter and line commutated inverter with solutions to grid issues.
Simul Model Pract Theory 42:147–159
14. Hsieh YP, Chen JF, Liang TJ, Yang LS (2012) Novel high step-up DC converter with coupled-
inductor and switched-capacitor techniques. IEEE Trans Ind Electron 59(2):998–1007
15. Zhang N, Sutanto D, Muttaqi KM, Zhang B, Qiu D (2015) High-voltage-gain quadratic boost
converter with voltage multiplier. IET Power Electron 8(12):2511–2519
16. Tan SC, Nur M, Kiratipongvoot S, Bronstein S, Lai YM, Tse CK, Ioinovici A (2009) Switched-capacitor converter configuration with low EMI emission obtained by interleaving and its large-signal modelling. In: Proceedings of IEEE international symposium on circuits systems, pp 1081–1084
17. Tan SC, Kiratipongvoot S, Bronstein S, Ioinovici A, Lai YM, Tse CK (2011) Adaptive mixed
on-time and switching frequency control of a system of interleaved switched-capacitor con-
verters. IEEE Trans Power Electron 26(2):364–380
18. Muhammad M, Armstrong M, Elgendy MA (2016) A nonisolated interleaved boost converter
for high-voltage gain applications. IEEE J Emerg Sel Top Power Electron 4(2):352–362
19. Rosas-Caro JC, Valdez-Resendiz JE, Mayo-Maldonado JC, Salas-Cabrera R, Ramirez-Arredondo JM, Salome-Baylon J (2011) Interleaved power converter with current ripple cancelation at a selectable duty cycle. In: Proceedings of IEEE ECCE, pp 122–126
20. Chen RT, Chen YY, Yang YR (2008) Single-stage asymmetrical half-bridge regulator with
ripple reduction technique. IEEE Trans Power Electron 23(3):1358–1369
21. Rosas-Caro JC, Mancilla-David F, Mayo-Maldonado JC, Gonzalez-Lopez JM, Torres-Espinosa
L, Valdez-Resendiz JE (2013) A transformer-less high-gain boost converter with input current
ripple cancelation at a selectable duty cycle. IEEE Trans Ind Electron 60(10):4492–4499
Prospect of Pico-Hydro Electric Power
Generation Scheme by Using Consuming
Water Distributed to Multi-storage
Building
1 Introduction
the world. The vast development and growth in the economic, industrial, and information sectors have driven the expansion of urban and suburban areas in our country. This makes people migrate to urban and suburban areas, which increases the population density, as can be seen in India over the last 10 years.
Electrical power is important for the daily activities of industries, offices, and households. However, due to limitations in the availability of sources, generation cannot fully meet the demand, resulting in a shortage of power and leading to power cuts. This severely affects the daily activities of our life and leads to huge losses. In this scenario, it is essential to find an alternative: an energy source that is available, unknowingly and unused, in our own houses. This energy is available in the overhead water tank of every multi-storage building, and the freely available hydro energy in the overhead tank can be converted into electrical energy by using a pico-hydropower generation system.
It is well identified and confirmed that plenty of unused energy is freely available, in the form of potential energy, in the overhead water tank of every multi-storage building. This energy is wasted while water is consumed in daily usage. It can be extracted from the pipelines while the water flows from the overhead tank to individual houses, and the same energy can be used to operate a micro-hydroelectric power generation unit at the domestic level.
A pico-hydropower scheme can generate electrical power up to a maximum of 5 kW [1–5], while a micro-hydropower scheme can generate up to 10 kW. Power generation fully depends upon the availability of hydro energy. Normally, pico-hydropower generation systems are found in pastoral hilly zones [6–10], but this new concept can be applied in multi-storage buildings by designing and selecting a suitable PHTG set for the consumed water distributed to the houses. This power is clean and green and has a low operating cost. It will be the best method for generating electricity on a domestic basis to meet the power demand without any pollution, which can be called green power [5, 11–14]. It can be made mandatory in all multi-storage buildings to harvest electrical power from the freely available potential energy.
In this method, a maximum power of up to 10 kW can be generated in every multi-storage building, since a water source will always be available in the overhead tank, as it is compulsory to maintain the water level in the overhead tank for day-to-day needs. Therefore, power generation is always possible.
energy for rotating the water turbine [3], which acts as a prime mover for the alternator that produces electricity. This electrical power can be stored in a battery using the storage system. The following factors determine the generation of power in a pico-hydro generation scheme: the availability of water and its flow rate [2] in the pipelines, the head of the pico-hydro scheme, the cost, and the efficiency of the scheme.
3 Theoretical Analysis
The input and output power of the Turbine Generator (TG) set are given by Eqs. (1) and (2), where Pin is the input power of the TG set, Pout is the output power of the TG set, H is the net head in meters, Q is the water flow rate in liters/second, and g is the acceleration due to gravity.
3.2 Head
Head is a measure of the fall of water from the overhead tank to the water turbine blades through the pipeline. The generated power depends on the pressure and flow rate of the water downstream. The pressure gauge reading in PSI (Pounds per Square Inch) is converted to head in meters using Eq. (3),

H = 0.704 * P   (3)
3.3 Water Flow Rate
The water flow rate is the amount of water flowing through the pipe in one second (liters/second). The input power is calculated by measuring the head in meters and the water flow rate [4] in liters/second, taking gravity (9.81 m/s2) into account. The amount of water that flows in one second can be measured by using a bucket of known capacity or volume.
3.4 Efficiency
In the proposed pico-hydro system, 30% of the total hydro power is lost due to mechanical losses and 20% is lost when the water hits the turbine blades [2] while converting mechanical power into electrical power. Considering these two losses, the efficiency [4] used to estimate the potential output power is normally 50%. The efficiency of the pipeline depends highly on the material, length, and diameter of the pipe: the larger the pipeline diameter, the less friction occurs and the more power can be delivered to the turbine, but the cost will be higher.
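The theoretical relations above can be checked numerically. The sketch below assumes the standard hydro power relation Pin = g * H * Q (with Q in liters per second, so a water density of 1 kg/L is implicit) together with the 50% overall efficiency stated in Sect. 3.4, and it approximately reproduces the prototype outputs reported later (about 2.45 W at 0.166 L/s and a 3 m head); the function names are illustrative, not taken from the paper.

```python
G = 9.81           # acceleration due to gravity (m/s^2)
EFFICIENCY = 0.5   # overall efficiency assumed in Sect. 3.4

def head_from_psi(pressure_psi):
    """Eq. (3): convert a pressure gauge reading in PSI to head in meters."""
    return 0.704 * pressure_psi

def output_power(head_m, flow_lps, efficiency=EFFICIENCY):
    """Estimated electrical output power (W) for a given head (m) and flow
    rate (L/s), assuming Pin = g * H * Q and Pout = efficiency * Pin."""
    return efficiency * G * head_m * flow_lps

# Flow rates from the prototype experiments at a fixed 3 m head.
for q in (0.166, 0.25, 0.33):
    print(f"Q = {q:.3f} L/s -> P_out ~ {output_power(3.0, q):.2f} W")
```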
The setup has a Pelton turbine and a rotating armature AC generator. The turbine blades are shown in Fig. 3. The Pelton turbine is commonly used in small-scale hydropower systems, particularly in pico-hydro systems [5], due to its suitability. The turbine type is selected based on the speed range and power capacity of the alternator to be used.
The rotating armature AC generator is shown in Fig. 4. The pico-hydro scheme uses a permanent magnet AC generator [3] because it is cheaper and smaller in size. Due to the use of a permanent magnet field, the field copper loss is eliminated, which in turn increases the efficiency.
The energy generated by the turbine generator will vary depending on the consumption of water by the houses of the multi-storage building. Therefore, the generated energy is required to be stored [5] in a battery. The proposed energy storage system is shown in Fig. 5. It uses a 12 V battery to store the energy with the help of a boost converter.
The boost converter acts as an automatic constant voltage regulator for the battery, providing effective charging even if the generated voltage is less than the set reference voltage. A microcontroller is used to monitor and control the boost converter output voltage and the generated voltage of the PHTG set. The stored DC supply can be used directly to power LED luminaires during blackout conditions [8], particularly at night at exit points. The same power can also be used for communication devices, for example, to charge mobile phones, laptops, etc. A photograph of the developed hardware is shown in Fig. 6.
From Table 2, it can be seen that the output power depends on the water flow rate and the head; in the proposed prototype PHTG model, the head is fixed at 3 m. The output power [10–12] will increase with an increase in either head or water flow rate. The 12 V battery voltage is controlled by using a charge controller unit. By increasing the floor level of the building, the head increases, which in turn increases the water flow rate and pressure; both act on the turbine and increase the speed of the generator set, which helps to produce higher power. Using the data obtained from any multi-storage building, a suitable PHTG set can be selected. Both AC and DC electrical power can be generated from the PHTG set.
6 Conclusion
This paper proposes the generation of electric power using a pico-hydro generation system in residential multi-storage buildings. Every residential multi-storage building has an overhead water tank, which contains potential energy. The potential energy is converted into kinetic energy whenever the water is used by the residents. This kinetic energy is used for generating electric power, which is stored in a battery. The proposed system generates 2.45, 3.67, and 4.90 W for water flow rates of 0.166, 0.25, and 0.33 L/s, respectively. This proposed green energy system will reduce the power demand of every multi-storage building. The proposed work can be extended to increase the power output by increasing the number of blades on the turbine, the number of turns in the coil, and the head.
References
GSM Entrenched Intelligent Phase Changer

Abstract Generally, power fluctuations in the distribution system occur more frequently in one or two phases than in all three phases. Thus, in any commercial or domestic power supply system where three phases are available, it is advisable to have an automatic changeover to a healthier phase for the single-phase critical loads. A microcontroller-based intelligent circuit is designed using embedded system technology to monitor and compare the voltage, frequency, and current with predefined reference values. A single-phase critical load will be operational on any one of the phases. If the monitored parameters fall below or rise above the reference values on the loaded phase, the load will automatically switch over to the next available healthier phase. A fault message will be sent to the intended recipients through SMS using GSM technology, and an LCD display will show all the measured values, including those of the healthier phases. Provision for an inverter backup is made to give continuity of power to the critical load in case all three phases have fluctuations or power failures. Options are also provided for an authorized person to control the phase changer through SMS. Further, the under- or overvoltage can be automatically controlled to a stable reference voltage level by means of SMS.
1 Introduction
The stability problems in power and the failure of phases are threats to the growth of the economy in developing countries like India. If stability is not achieved, it means the development is poor. Overcoming the above factors boosts the economy of a country.

R. Harikrishnan (B)
Symbiosis Institute of Technology, Symbiosis International Deemed University, Pune 412115, India
e-mail: rhareish@gmail.com

P. Sivagami
School of Electrical and Electronics Engineering, Sathyabama Institute of Science and Technology, Chennai 600119, India
e-mail: sivagamitec@gmail.com
The public power supply feeds industrial, commercial, and domestic customers. During power generation, transmission, or distribution there is a possibility of total power failure, an imbalance in phases, or other technical problems. To overcome these factors, it is necessary to automate phase changing during phase failure. This is done to safeguard the consumer appliances against these instability factors.
Most of the equipment operated by manufacturing companies requires a single-phase supply. The threats to the supply are unbalanced voltages, overloads, and undervoltages. In such cases, stability is maintained by phase changeover, which is usually done manually. The processing time of the manual process is longer, and the delay may cause serious damage to machines and even to the products. This problem is overcome by an automatic phase switching system. Due to technical advancement, there are different ways in which an automatic phase switching system can resolve these problems, especially in a developing country like India.
Furthermore, this paper presents an automatic phase changeover switch. Many prototype designs are available which perform equivalently to a manual three-phase selector switch, but the prototype developed in this paper is an automatic phase selector designed for three-phase AC input power to single-phase output applications. This paper employs a microcontroller, which is used to store, process, and change the data according to the user requirement. This is possible because the microcontroller has a CPU, memory, I/O ports, timers/counters, ADC/DAC, serial ports, interrupt logic, oscillator circuitry, and many more functional blocks on a single chip; hence, it reduces the cost of hardware. The microcontroller used is the PIC18F458.
An asynchronous sampling technique and compensation of phase shift help in improving the accuracy of microprocessor-based apparatus for measuring voltage, current, and electric power [1]. The noise, arcing, wear, and tear associated with electromechanical relays are eliminated by a solid-state relay (SSR). The number of control system components is reduced by employing digital integrated circuits and a microcontroller, and the speed of the system is improved. The system is provided with an automatic phase selector, overvoltage and undervoltage level monitoring, an alarm system, and an LCD display [2]. In order to protect electric and electronic devices against problems such as voltage drop, overshoot, and fluctuation, the main supply employs a microcontroller-based circuit. Continuous monitoring is done to find the stability of the voltage; if a problem arises due to a fall or rise of voltage, the feeding board is disconnected from the main supply, and once the supply becomes stable, the feeding board is reconnected [3]. The switching between the solar supply, utility grid, and diesel generator is achieved using a transformer, rectifier, regulator, comparator, and relays, and high-speed electronic devices reduce the switching time [4]. By employing a relay, comparator, and transformer, the equipment is made to run properly; if the voltage is low in any of the phases, then the voltage is corrected to overcome the situation [5].
2 Proposed System
The three-phase input is connected to potential transformers and then current transformers, where the voltage and current are measured, respectively. Both measured values are smoothened in the signal conditioner and fed to the microcontroller through its input ports. A separate control transformer with a voltage regulator is used to regulate the auxiliary power supply to the microcontroller and the other functional circuits used in the kit. The voltage reference is connected to a zero crossing detector to calculate the signal frequency. A GSM modem is connected to the microcontroller through the transmitter-receiver ports to send and receive messages. A seven-segment LCD display is connected through PORT D of the microcontroller, where all the measured parameters are displayed. A relay driver with four relays is connected to the microcontroller through the output port, which controls the operation of the load. This is an intelligent phase changer system using microcontroller and GSM technology, which keeps monitoring all the three-phase parameters continuously and compares them with the reference values set as default. Whenever there is a variation in the incoming power parameters, an error is produced in the controller. The kit is initialized at the startup of the device; by default, the kit starts in auto mode.

(Figure: block diagram of the proposed system showing the three-phase R, Y, B, N input through PTs and CTs, ADC and zero crossing detector, PIC18F458 microcontroller, LCD, relay driver with R, Y, B phase and UPS relays, GSM modem, UPS, RPS, and load)
4 Simulation Results
This system was simulated using MATLAB, and faults like undervoltage, overvoltage, under-frequency, and over-frequency were simulated and the results were verified. The simulation circuit of the GSM-based intelligent phase changer is shown in Fig. 2. The X-axis in the figures represents the on/off condition and the Y-axis represents time. Voltage faults can be created in the simulation by connecting a single-phase impedance in the source, which creates a short-circuit current flow and reduces the voltage drastically. When the voltage is dragged below the set point, the system switches the load to another healthier phase, which can be verified on the output scope. Figure 3 shows the no-fault condition, where the system is fault free, all the phases are healthy, and the load is connected to the R phase; the resultant graph with the output ON in the R phase is shown in Fig. 4. As shown in Fig. 5, when a voltage fault is intercepted in the R phase, the load is switched over to the next available healthy phase, and the resultant graph with the output ON in the Y phase is shown in Fig. 6. As shown in Fig. 7, when voltage faults are intercepted in the R and Y phases, the load is switched over to the B phase, and the resultant graph with the output ON in the B phase is shown in Fig. 8. As shown in Fig. 9, when voltage faults are intercepted in all three phases, the load is switched over to the inverter, and the resultant graph with the output ON in the inverter is shown in Fig. 10.
Using MATLAB, different fault conditions were created and the phase changer operation was checked for each of them. When all the phases were available, the load was connected to the R phase, and subsequently the phase was changed to the other phases when single-phase faults were created. Faults in the grid are created using a single-phase fault, which reduces the voltage below the set value when turned on. All the individual phases are intercepted with the fault and the output scope is cross-verified. Here, the inputs are the voltage and current of the individual R, Y, B phases. The controller reads the voltage values of the individual phases R, Y, B and compares them with a reference value. If all the three-phase voltages are either below the low set value or above the high set value, the UPS will be engaged. Otherwise, the modulus of the difference between each individual phase voltage and the base voltage is calculated and compared across the phases, and the phase with the lowest difference is taken in line. This process is repeated cyclically. When any phase voltage exceeds the threshold value, the phase having the next lowest value is automatically taken in line.
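The selection logic described above can be summarized in a short sketch. The threshold values, base voltage, and function names below are illustrative assumptions, not the firmware actually programmed into the PIC18F458.

```python
V_LOW, V_HIGH = 200.0, 250.0   # assumed under/over-voltage set points (V)
V_BASE = 230.0                 # assumed base (nominal) phase voltage (V)

def select_source(phase_voltages):
    """Pick the healthiest phase for the single-phase critical load, or fall
    back to the UPS if all three phases are outside the set points.

    phase_voltages: dict such as {"R": 228.0, "Y": 185.0, "B": 232.0}
    """
    healthy = {ph: v for ph, v in phase_voltages.items() if V_LOW <= v <= V_HIGH}
    if not healthy:
        return "UPS"
    # Take the healthy phase whose voltage deviates least from the base voltage.
    return min(healthy, key=lambda ph: abs(healthy[ph] - V_BASE))

print(select_source({"R": 228.0, "Y": 185.0, "B": 241.0}))   # -> 'R'
print(select_source({"R": 150.0, "Y": 160.0, "B": 300.0}))   # -> 'UPS'
```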
5 Hardware Implementation
A photograph of the GSM-based intelligent phase changer is shown in Fig. 11. When the kit is powered up, the green LED on the GSM module starts blinking at a faster rate and slowly settles to blinking once every three seconds, which indicates that the signal communication is healthy and the system is fully initialized. Power-on LEDs on the controller board and the regulated power supply board ensure that the auxiliary power to the controller is functioning. A reset button placed next to the microcontroller is used as a system reset. During system startup, the LCD display shows the initialization stages of the kit, and once the system is normalized, it starts displaying the individual phase parameters in a cyclic mode. Each screen shows the actual values and their differences from the set values in percentage.
6 Conclusion
In the proposed GSM-based intelligent phase changer, both auto and manual operations were performed, and the results are at a satisfactory level. In auto mode, the unit continuously monitors all the three-phase parameters, and action is initiated based on the incoming voltage variations. The response time for the changeover from a weaker phase to a healthier phase is 2–3 s. If a problem occurs in all three phases, backup support is provided by the UPS, whose rating is selected according to the critical load demand. In manual mode, the kit operation was performed by sending SMS using GSM technology. Other power supply parameters like frequency, power factor, and load current are also measured by the kit and displayed.
References
1. Lai MF, Wu YP, Hsieh GC, Lin JL (1995) Design and implementation of a microprocessor-
based intelligent electronic meter. In: Conference paper IEEE Xplore, pp 268–272
2. Chukwubuikem NM, Ekene MS, Godwin U (2012) A cost effective approach to implementing
change over system. Acad Res Int 2:62–72
3. Alnaham MY, Suliman MM (2015) Microcontroller-based system for voltage monitoring,
protection and recovery using proteus VSM software. Int J Comput Appl 118:1–5
4. Gupta AK, Singh C, Singh G, Kumar A (2015) Automatic cost effective phase selector. Int J
Adv Res Electr Electron Instrum Eng (IJAREEIE) 4:3919–3924
5. Patel L, Sonawane S, Thakur N, Nagare K (2016) Automatic phase selector using micro-
controller 89C52. Int Res J Eng Technol (IRJET) 3:2595–2599
6. Ganjewar SP, Kashid SG, Dahde SS, Kalubarme PP (2016) Micro controller based 3Ø selector
and preventer for industrial appliances. IJREE-Int J Res Electr Eng 3:43–46
7. Heda L, Bhutada P, Thakur R, Bhattad P, Singh V (2016) Fault monitoring and protection
of three phase devices. Int J Innov Res Electr, Electron, Instrum Control Eng (IJIREEICE)
4:208–210
8. Salunkhe AM, Jagtap RR (2017) PIC based Frequency and RMS value measurement. Int J
Innov Res Sci Technol 3:55–61
9. Ajith Kumar V, Deepak PR, Fayis Muhammed TV, Vishnu R (2017) Automatic three phase
selector with Power factor Improvement. Int Res J Eng Technol (IRJET) 4:2651–2653
10. Salunkhe P, Raut D, Patil M, Salunkhe P, Yusuf M (2017) Automatic phase selector using
microcontroller AT89C51. Int J Electr Electron Eng 9:720–724
Design and Performance Analysis
of Current Starved Voltage Controlled
Oscillator
Abstract In this paper, current starved voltage controlled oscillators (CSVCO) using
CMOS 180 nm technology are designed and their performances are evaluated. Then,
a comparative study of different topologies of CSVCO like five-stage and seven-
stage CSVCO is performed on the basis of power consumption, phase noise, center
frequency, and tuning range. The simulation results reveal the better performance of
the proposed design as compared to existing current starved VCO in terms of phase
noise and power consumption.
U. Nanda (B)
Department of Electronics and Communication Engineering, Vellore Institute of Technology,
Amaravati 522237, India
e-mail: uk_nanda@yahoo.co.in
D. Nayak · S. M. Biswal
Department of Applied Electronics and Instrumentation, Silicon Institute of Technology,
Bhubaneswar 751024, India
e-mail: dnayak@silicon.ac.in
S. M. Biswal
e-mail: sudhansu.mohan@silicon.ac.in
S. K. Pattnaik · S. K. Swain
Department of Electronics and Communication Engineering, Silicon Institute of Technology,
Bhubaneswar 751024, India
e-mail: sushanta.pattnaik@silicon.ac.in
S. K. Swain
e-mail: sswain@silicon.ac.in
B. Biswal
Department of Electronics and Communication Engineering, Gayatri Vidya Parishad College of
Engineering, Vishakhapatnam 530048, India
e-mail: birendra_biswal1@yahoo.co.in
1 Introduction
A VCO is one of the basic and integral circuit elements in analog and digital circuits [1]. The applied input voltage determines the instantaneous oscillation frequency; hence, there exists a linear relationship between the output frequency and the applied input voltage [2]. A VCO is really useful in a circuit because its oscillation frequency can be altered as desired. It is the paramount block of the phase-locked loop (PLL), in which the oscillator's frequency can be locked to the frequency of another oscillator. It is also used in transceivers, frequency synthesizers, clock generator circuits, etc.
There are basically three types of VCOs: the current starved VCO (CSVCO), the differential VCO, and the LC-VCO. Among them, the most commonly used is the CSVCO, as its structure involves a ring oscillator which is easily integrable, more compatible with CMOS technology, and consumes less die area [3, 4]. The ring oscillator contains an odd number of NOT gates or inverters connected in a chain, with the output of the last inverter fed back into the first. This feedback configuration can be used as a storage element and is usually employed in frequency synthesizers and PLL circuits [5, 6]. The output frequency of a ring VCO is a function of the number of stages used and the delay of each cell. The VCO must be designed wisely and judiciously, as a number of requirements such as high gain factor, minimum phase noise, low power consumption, good tuning range, and high-frequency linearity should be met. Therefore, by careful design and experimentation, a good VCO design can be developed.
This paper includes six sections. Section 2 elaborates on the basic VCO used. Section 3 demonstrates the implementation and presents the results of the five-stage VCO. The design and verification of the seven-stage VCO circuit are demonstrated in Sect. 4. Section 5 presents the comparison between the proposed circuit and other existing circuits. Section 6 ultimately draws the conclusions.
where Aa (ω) is the transfer function of the amplifier and Af (ω) is the transfer function
of the feedback unit. As per the Barkhausen’s criterion, an oscillator must satisfy
these two conditions:
For stable oscillations, first, the loop gain of the whole circuit must always be greater than unity and, second, the phase angle of the circuit must be 0° or 360°. Ring oscillators (RO) have a chain structure in which the inverters are connected back to back [8]. In the case of a single-ended ring oscillator, an odd number of stages is used to satisfy Barkhausen's criterion. The block diagram of a single-ended ring oscillator having five stages is shown in Fig. 2.
The frequency of this oscillator can be calculated as

f_{osc} = \frac{1}{2 N T_d} \ \text{Hz}   (4)

where N is the number of inverter stages and Td is the average time delay of each inverter.
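Equation (4) directly relates the per-stage delay to the oscillation frequency. The sketch below uses the 1.825 GHz five-stage result reported later in Table 2 to back out the implied stage delay and then predicts the frequency of a seven-stage ring with the same delay; this is only a rough estimate, since the delay actually varies with the control voltage and loading.

```python
def ring_osc_frequency(n_stages, stage_delay_s):
    """Eq. (4): oscillation frequency (Hz) of an N-stage ring oscillator."""
    return 1.0 / (2.0 * n_stages * stage_delay_s)

def stage_delay(n_stages, f_osc_hz):
    """Inverse of Eq. (4): per-stage delay needed for a target frequency."""
    return 1.0 / (2.0 * n_stages * f_osc_hz)

# Delay implied by the five-stage CSVCO running at 1.825 GHz (Vctrl = 0.9 V).
td = stage_delay(5, 1.825e9)
print(f"Per-stage delay ~ {td * 1e12:.1f} ps")
print(f"Seven stages with the same delay -> {ring_osc_frequency(7, td) / 1e9:.2f} GHz")
```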
3 Five-Stage CSVCO
A voltage controlled ring oscillator [9–11] with a five-stage current starved delay cell is shown in Fig. 3. From the schematic, it can be seen that there is a combination of PMOS and NMOS devices in each delay cell.
For example, MOSFETs M2 and M3 operate like an inverter, while M1 and M4 are current sources that limit the current available to the inverter; the inverter can thus be thought of as being starved for current. The drain currents of M21 and M22 are equal and determined by the input control voltage, and the currents in M21 and M22 are mirrored in each inverter stage.
It is required that the VCO consume low power; hence, a low oscillation frequency must be set. The output waveform of the five-stage voltage controlled oscillator is shown in Fig. 4. This output is captured at an input control voltage of 0.9 V. To convert an oscillator into a voltage controlled oscillator, its frequency must be made dependent on a control quantity so that the delay of each cell can be varied easily. This is done by having a control voltage and by providing a MOSFET after each stage, which yields a valid and good voltage controlled oscillator characteristic. The MOS-based control has a vital function in setting the frequency of each delay cell.
In order to design a good CSVCO, the design parameters should be considered and adjusted in a proper manner. The specifications for designing the layout of the VCO are given in Table 1. The layout of the five-stage CSVCO is designed and shown in Fig. 5.
The output waveforms are obtained by varying the input control voltage in steps of 0.5 V. The output frequencies of the different waveforms obtained by varying the input control voltage are calculated and tabulated in Table 2. The tuning range of the five-stage CSVCO was calculated as 85.3%, as shown in Fig. 6. The rapid short-term variation in the frequency domain representation of the wave is defined as phase noise [12–14]; it is caused by instability in the time domain, which is known as jitter. Jitter causes degradation of the output signal and hence should be minimized.

Table 2 Frequency, power consumption, and phase noise variation with respect to input control voltage

Voltage (V)   Frequency (GHz)   Power (mW)   Phase noise (dBc/Hz)
0.5           0.0120            0.0832       −85.530
0.9           1.825             1.278        −85.373
1.5           2.265             1.92         −85.360
1.8           2.31              1.98         −84.990
The phase noise variations for different offset frequencies are plotted in Fig. 7.
Here, the phase noise at 1 MHz offset frequency is calculated to be −81.126 dBc/Hz.
4 Seven-Stage CSVCO
In today’s era, the ring oscillators are popularly used because of its easy operation,
simple structure, and low fabrication cost [15–18]. Here, a study of the ring oscil-
lator with an extra delay cell is carried out. Due to this new delay cell, the circuit’s
tuning range is improved and the power consumption is reduced. Figure 8 shows the
schematic of seven-stage CSVCO. In this designed VCO, seven inverter stages are
cascaded.
The output waveform of the seven-stage VCO at 0.9 V of the control voltage is
represented in Fig. 9. The layout of the seven-stage current starved VCO is designed
with the specifications given in Table 1 and it is shown in Fig. 10. The phase noise
curve is presented in Fig. 11.
The variation of the frequency range of the proposed current starved delay cell-
based VCO for different control voltages is represented graphically in Fig. 12. It is
showing a positive linearity and its value is most linear in the middle of the curve.
Hence, this type of ring VCO can be used in applications where an increased linearity
is required. Simulating the VCO for different control voltages, the performance
parameters are obtained and as shown in Table 3.
The proposed seven-stage CSVCO circuit has been compared with the five-stage CSVCO and also with previously published papers on the basis of various parameters, as shown in Table 4.
6 Conclusion
This paper describes the architectural analysis of five-stage and seven-stage CSVCOs along with their performance parameters. It is observed that the phase noise of the seven-stage CSVCO is better than that of the five-stage CSVCO, while the power consumption of the five-stage VCO (1.278 mW) is less than that of the seven-stage VCO (1.28 mW). Furthermore, the circuit is suitable for a low supply voltage because of its simple structure. A modified CSVCO providing better results in terms of tuning range, phase noise, power consumption, and layout area can be designed, and a comparison between the newly designed CSVCO and recent architectures can be demonstrated.
References
1. Hajimiri A, Lee TH (1999) The design of low noise oscillator. Kluwer Academic Publishers
2. Kang SM, Leblebici Y CMOS digital integrated circuits: analysis and design, 3rd edn. McGraw-
Hill Publication
3. William Shing TY, Luong HC (2001) A 900-MHz CMOS low-phase-noise voltage-controlled
ring oscillator. IEEE Trans Circ Syst II Analog Digital Signal Process 48:216–221
4. Rout PK, Acharya DP, Nanda U (2018) Advances in analog integrated circuit optimization: a
survey. In: Applied optimization methodologies in manufacturing systems. IGI Global, USA,
pp 309–333
5. Nanda U, Acharya DP (2017) An efficient technique for low power fast locking PLL operating in
minimized dead zone condition. In: International conference on devices for integrated circuits,
pp 396–400, 23–24 Mar 2017, Kalyani, India
6. Nanda U, Acharya DP, Patra SK (2013) Design of a low noise PLL for GSM application. In:
International conference on circuits, controls and communications (CCUBE), pp 1–5, 27–28
Dec 2013, Bangalore, India
7. Bhardwaj M, Pandey S (2015) Design and performance analysis of wideband CMOS voltage
controlled ring oscillator. In: 2nd international conference on electronics and communication
systems (ICECS)
8. Saeidi B et al (2010) A wide-range VCO with optimum temperature adaptive tuning. In: Radio
frequency integrated circuits symposium (RFIC), 2010. IEEE
9. Caruso G, Macchiarella A (2008) Low power design of delay interpolating VCO. In: ICSES,
pp 129–132, Sept 2008
10. Duster JS, Kornegay KT (2004) A comparative study of MOS VCOs for low voltage high
performance operation. In: Proceedings of international symposium on low power electronics
and design, pp 244–247
11. Lu LH, Hsieh HH (2006) A widetuning-range CMOS VCO with a differential tunable active
inductor. IEEE Trans Microw Theor Tech
12. Nanda U, Acharya DP (2017) Adaptive PFD selection technique for low noise and fast PLL in
multi-standard radios. Microelectr J. Elsevier, 64:92–98
13. Nanda U (2016) A novel error detection strategy for a low power low noise all-digital phase
locked loop. J Low Power Electron, Am Sci Publ 12:30–34
14. Nanda U, Acharya DP, Patra SK (2014) Low noise and fast locking PLL using a variable delay
element in the phase frequency detector. J Low Power Electron Am Sci Publ 10:53–57
15. Lee SY, Amakawa S (2010) Low-phase-noise wide-frequency-range differential ring-VCO
with non-integral sub harmonic locking in 0.18 μm CMOS, 27–28 Sept 2010
16. Kang S, Leblebici Y (2003) CMOS digital integrated circuits: analysis and design. Tata
McGraw-Hill Edition
17. Lee TH, Hajimiri A (2000) Oscillator phase noise: a tutorial. IEEE J Solid-State Circ
35:326–336
18. Kinger B et al (2015) Design of improved performance voltage controlled ring oscillator. In:
Fifth international conference on advanced computing & communication technologies (ACCT)
19. Panda M, Mal AK (2015) Design and performance analysis of voltage controlled oscillator
in CMOS technology. In: International conference on signal processing and communication
(ICSC)
20. Mishra A, Sharma GK (2015) Performance analysis of current starved VCO in 180 nm. In:
2015 annual IEEE India conference (INDICON). IEEE
Estimation of Water Contents
from Vegetation Using Hyperspectral
Indices
Abstract This paper outlines the research objectives to investigate approaches for the assessment of vegetation water content using hyperspectral remote sensing and a moisture sensor. The water content of crops is an indicator of crop health for precision farming and monitoring. In the present research, spectral indices along with some chemical extraction procedures were identified for the estimation of crop water content. The investigated crop species, namely Vigna Radiata, Vigna Mungo, Pearl Millet, and Sorghum, were collected from the Aurangabad region of Maharashtra, India. Spectral reflectance curves over the crop growth stages were measured using an ASD FieldSpec 4 Spectroradiometer and a 150 soil moisture sensor, covering healthy, diseased, and dry leaves in a standard laboratory environment. It is found that there is a positive correlation between WI and the soil moisture sensor readings, with accuracies of 0.99, 0.76, and 0.97. The research work was implemented using open source Python software. In conclusion, water estimation from crops may be useful in irrigation mapping and drought risk modeling.
1 Introduction
Estimation of the water stress level of vegetation plays a vital role in precision farming and agricultural monitoring systems. Crop water stress monitoring maintains crop health for the crop irrigation scheduling process [1]. Mapping and assessing large spatial areas is not feasible with field measurements alone because of the limited number of samples; to overcome such pitfalls, remote sensing techniques offer a nondestructive alternative. Environmental studies recognize that a lack of water availability impacts the plant growth process, with a decrease in crop productivity and yield [2]. Water bands absorb reflected radiant energy in the Short-Wave Infrared (SWIR) region ranging from 1.3 to 2.7 µm, and some of the absorption bands are centered in the near-infrared, from 0.72 to 1.3 µm. The literature [3] gives details about the absorption peaks at the 0.9 and 0.97 µm wavebands used for analysis. Remote sensing technology offers measurements of the leaf and its canopy for the estimation of water content from the leaves. Reflectance spectra of vegetation collected in field investigations have some lacunae due to canopy coverage, whereas reflectance spectra measured in a controlled laboratory environment provide minute details about water stress. In other words, field investigation yields a canopy spectral response with some lacunae for water stress estimation, while laboratory measurement on single leaves under a standard environment provides the instantaneous effects [4]. The spectral response curve varies with the crop leaves, from thin to thick, according to physiological parameters. Matching leaf water content is challenged by the varying physiological structure of leaf cells; thickness and reflectance vary from plant to plant. The literature assumes that the chlorophyll content of leaves may discriminate crop growth stages, but the chlorophyll stress pattern depends on the availability of water content. In particular, the wavelength range from 1530 to 1720 nm seems to be the most appropriate for water estimation [5]. In the SWIR wavelength region, 1400-2500 nm, measurements have shown considerable changes in this section of the spectrum resulting from changes in the water content of plant leaves [6].
The paper reports an experimental method to acquire the reflectance signatures of crop species and to analyze the water stress level across crop growth stages. This paper contains four sections, including an introduction to the background knowledge of crops and the significance of the promoted research. Section 2 contains details about the study sites and data collection methods. A comparative discussion of the results with analysis is given in Sect. 3. Finally, the conclusions are summarized in the last section.
Fig. 1 Methodology of
water contents estimation
study
2 Experimental Setup
This section describes the location of the study area along with instrument details and the characteristics of the non-imaging hyperspectral instrument.
All the experimental data selected for the crop study were obtained during the growing seasons of summer and winter 2017 in the Aurangabad region (19°54′3.7944″N latitude and 75°21′8.9208″E longitude), Maharashtra, India [7]. The region has an annual precipitation of 725.8 mm and a mean annual temperature of 17–33 °C. Four main types of crop species, planted in black soil, were selected as objectives of the investigation. A total of 30 samples of each species were measured using the ASD Field Spec 4 spectroradiometer. From the four species (namely, Vigna Radiata, Pearl Millet, Sorghum, and Vigna Mungo), 480 spectral responses were collected for analysis. Figure 1 shows the proposed workflow of the succeeding sections for the estimation of water contents.
Spectral signatures were taken using the Field Spec 4 (Analytical Spectral Devices, Boulder, CO, USA) high-resolution field-portable spectroradiometer, with a spectral range covering the VIS, NIR, and SWIR regions (350–2500 nm). The instrument sampling intervals of the ASD are 1.4 and 2 nm, resampled to a 1 nm linear spacing interval. All the spectral responses were collected under a 45° lamp angle within the Field of View (FOV), using a 1,000 W tungsten halogen quartz lamp under standard darkroom-controlled conditions. White reference panel measurements were collected to standardize and calibrate the instrument for database collection. The spectral responses were
Fig. 2 Spectral response curve of Vigna Radiata crop with four growth stages captured during the
2017 experiment setup
collected throughout the experiment between 11:30 a.m. and 2:00 p.m. to avoid bidirectional reflections. Each reflectance curve was measured as an average of 10 spectral measurements with slightly varying sample locations [8]. An 8° FOV along with a fiber optic cable was used for spectra collection using RS3, an inbuilt software tool calibrated with the instrument. Figure 2 provides the leaf samples and the generated spectral response curves ranging from 350 to 2500 nm, acquired using the spectroradiometer in a closed indoor environment. The spectra were collected across crop growth stages, including healthy leaves, diseased leaves, and dry leaves, for water contents estimation. The X-axis represents the wavelength spectrum and the Y-axis represents the reflectance of the observed samples.
The SM 150 soil moisture sensor measures the moisture available in soil, but here the tip of the sensor was placed within the leaves for the measurement of water contents. The instrument has a plastic body with two sensor tips attached to a gun-type handle with a measurement display. The output reading is internally captured as a DC voltage and then converted to readable units for the user [9]. Figure 3 shows the sample collection procedure of the SM 150 soil moisture sensor using crop leaves.
This section describes the filtering algorithm used to remove noise from the collected spectral response curves and the analysis techniques.
Fig. 3 Sample collection procedure of the SM 150 soil moisture sensor using leaf samples
The collected spectra were smoothed with the Savitzky–Golay filter [10], in which each smoothed value is a weighted moving average of its neighboring samples:
$$ \hat{S}_i = \frac{1}{N} \sum_{n} C_n \, S_{i+n} \quad (1) $$
Remote sensing research works with a range of spectral indices chosen according to the objectives of the researchers. The Water Index (WI) is one of the well-calibrated algorithms for estimation of the water contents of crop samples [11]. WI works with the reflectance at the 900 and 970 nm wavelengths, computed as the ratio R900/R970 [11].
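As an illustration of this preprocessing and index computation, the following minimal Python sketch (our own, not the authors' released code) smooths a reflectance spectrum with a Savitzky–Golay filter and computes WI = R900/R970; the spectrum, variable names, and filter settings are assumptions.

```python
# Hedged sketch: Savitzky-Golay smoothing (cf. Eq. (1)) followed by the Water Index.
import numpy as np
from scipy.signal import savgol_filter

wavelengths = np.arange(350, 2501)               # ASD Field Spec 4 range, 1 nm steps
reflectance = np.random.rand(wavelengths.size)   # stand-in for a measured spectrum

smoothed = savgol_filter(reflectance, window_length=15, polyorder=2)

def water_index(wl, refl):
    """WI = R900 / R970 (Penuelas et al. [11])."""
    r900 = refl[np.argmin(np.abs(wl - 900))]
    r970 = refl[np.argmin(np.abs(wl - 970))]
    return r900 / r970

print("WI =", water_index(wavelengths, smoothed))
```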
The Analysis of Variance (ANOVA) method is applied for testing the hypothesis that there is no difference between two or more population means, considering the spectral response values and the soil moisture readings. The ANOVA test checks whether there is no difference among a number of treatments of samples [12]. The ANOVA test assumes a significant P value for more accurate results.
$$ SS = n \, \frac{\sum_{i=1}^{I} \left( X_i - \bar{X} \right)^2}{I - 1} \quad (3) $$
where N is the number of pairs of samples, ΣXY is the sum of products of the paired scores from the two categories, ΣX is the sum of the X scores, and ΣY is the sum of the Y scores, computed for healthy, diseased, and dry leaves in Eq. (4).
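As a concrete illustration of this statistical step, the following short Python sketch (ours, not the authors' code) runs a one-way ANOVA across the three leaf categories and computes the Pearson correlation between the spectroradiometer and moisture-sensor readings, using the values reported in Table 1.

```python
# Hedged sketch: one-way ANOVA (cf. Eq. (3)) and Pearson correlation (cf. Eq. (4)) with SciPy.
from scipy.stats import f_oneway, pearsonr

healthy  = [1.023, 1.012, 1.110, 1.131]   # ASD values per crop (Table 1, HL column)
diseased = [0.969, 0.943, 0.979, 0.980]
dry      = [0.612, 0.679, 0.720, 0.707]

f_stat, p_value = f_oneway(healthy, diseased, dry)
print("one-way ANOVA: F = %.2f, p = %.4f" % (f_stat, p_value))

asd_hl    = [1.023, 1.012, 1.110, 1.131]  # spectroradiometer, healthy leaves
sensor_hl = [1.02, 1.01, 1.10, 1.11]      # SM 150 sensor, healthy leaves
r, p = pearsonr(asd_hl, sensor_hl)
print("correlation (HL): r = %.2f" % r)
```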
Four crops were selected for the study, including Vigna Radiata, Vigna Mungo, Pearl Millet, and Sorghum, together with their families, for the estimation of water contents. Data were analyzed in two aspects for a comparative study of the spectroradiometer and the moisture sensor. Table 1 provides the statistical measures for the crops at varying stages, including healthy, diseased, and dry leaf samples. Preprocessed reflectance
Table 1 Coefficient of correlation between the spectral response curve and the SM 150 soil moisture sensor for four crops (average of 10 samples; HL healthy leaves, DL diseased leaves, Dry L dry leaves)

Crop species   | Family     | ASD Field Spec 4        | Moisture sensor       | ANOVA (SS), α = 0.05
               |            | HL     DL     Dry L     | HL     DL     Dry L   |
Pearl Millet   | Poaceae    | 1.023  0.969  0.612     | 1.02   0.96   0.61    | 0.197
Sorghum        | Gramineae  | 1.012  0.943  0.679     | 1.01   0.93   0.68    | 0.15
Vigna Radiata  | Fabaceae   | 1.11   0.979  0.720     | 1.10   0.96   0.70    | 0.16
Vigna Mungo    | Fabaceae   | 1.131  0.98   0.707     | 1.11   0.99   0.69    | 0.18
spectra were considered for further analysis, with an average of 10 samples of each crop species within 3 categories, for calibration and one-way ANOVA followed by sum-of-squares prediction with alpha = 0.05; alpha = 0.05 works well for determining the differences according to the null hypothesis with respect to the spectrum population in the form of bands. Varying alpha values were tried for implementing ANOVA, but only 0.05 was found significant for the sum of squares (SS). The values for healthy leaves range from 1.01 to 1.131 based on the spectroradiometer and from 1.02 to 1.11 using the moisture detection sensor. The results for diseased leaves range from 0.943 to 0.98 and from 0.93 to 0.99, respectively. Dry leaf values range from 0.612 to 0.720, and for the soil moisture sensor from 0.61 to 0.70. The sum of squares (SS) in ANOVA consists of 0.197, 0.15, 0.16, and 0.18 for the analysis of crop identification. The average values signify that leaves ranging within 0.61–0.702 exhibit water stress levels corresponding to the diseases.
Table 2 shows the R² and root mean square error (RMSE) of the spectroradiometer and the SM 150 soil moisture sensor, with strong positive correlations of 0.99 for HL, 0.76 for DL, and 0.97 for dry leaves.
As per the objectives of our research, the correlation signifies that both procedures yield positive results. HL correlated best with an accuracy of 0.99, dry leaves give 0.97 accuracy, and diseased leaves give a lower value of 0.76 because disease variation affects the water stress level. This research shows that the soil moisture sensor also provides details of the water level from leaves with accurate readings.
Hyperspectral reflectance data (350–2500 nm) for four different crop species demonstrated significant responses of crop reflectance characteristics to growth conditions. Temporal data were also collected using the SM 150 soil moisture sensor. The current research signifies that crop water contents were estimated using the soil moisture sensor and compared with the spectroradiometer. Water content estimation helps to monitor crop stress for precision farming and monitoring. This research also signifies that SWIR leaf reflectance alone can capture the water stress level. The overall estimates showed accuracies of 0.99 for HL, 0.97 for dry leaves, and 0.76 for diseased leaves. The research work is implemented using Python software. Future directions for the current research will be a correlation study of photosynthetic pigments with water contents based on spectral response curves, and the chemical extraction process will be validated for further analysis to minimize time and cost using the remote sensing approach.
Acknowledgements The authors would like to acknowledge UGC SAP (II) DRS Phase-II, DST-FIST, and NISA for providing partial and technical support to the Department of Computer Science and IT, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, Maharashtra, India, and also thank the UGC-BSR research fellowship for financial assistance for this work. The authors would also like to acknowledge the Department of Physics, Dr. B.A.M.U. Aurangabad, for providing the lab facility for use of the soil moisture sensor.
References
1. Gao BC (1996) NDWI—a normalized difference water index for remote sensing of vegetation
liquid water from space. Remote Sens Environ 58:257–266
2. Maimaitiyiming M, Ghulam A, Bozzolo A, Wilkins JL, Kwasniewski MT (2017) Early detec-
tion of plant physiological responses to different levels of water stress using reflectance spec-
troscopy. Remote Sens 9:745. https://doi.org/10.3390/rs9070745
3. Datt B (1999) Remote sensing of water content in eucalyptus leaves. Aust J Bot 47:909–923
4. Thomas JR, Namken LN, Oerther GF, Brown RG (1971) Estimating leaf water content by
reflectance measurements. Agron J 63:845–847
5. Forty T, Baret F (1997) Vegetation water and dry matter contents estimated from top-of-the-
atmosphere reflectance data: a simulation study. Remote Sens Environ 61:34–45
6. Govender M, Dye PJ, Weiersbye IM, Witkowski ETF, Ahmed F (2009) Review of commonly
used remote sensing and ground-based technologies to measure plant water stress. Water SA
35, ISSN 1816-7950
7. Surase RR, Kale KV (2015) Multiple crop classification using various support vector machine
kernel functions. Int J Eng Res Appl 5(1), ISSN: 2248-9622
8. Surase RR, Varpe AB, Gaikwad SV, Kale KV (2016) Standard measurement protocol for ASD field spec 4 spectroradiometer. Int J Comput Appl 887–975
9. User manual for SM 150 soil moisture sensor (2016)
10. Bachko V, Alander J (2010) Preprocessing: smoothing and derivatives, University of Vaasa
11. Penuelas J, Pinol J, Ogaya R, Filella I (2010) Estimation of plant water concentration by the
reflectance Water Index WI (R900/R970). Int J Remote Sens 18(13)
12. Heron E (2009) Analysis of variance—ANOVA
13. Buxton R (2008) Statistics, machine learning support center
An Alternative Approach for Harmonic
Measurements in Distribution Systems
with Empirical Wavelet Transform
Abstract Addition of a large number of nonlinear loads in the electric power grid
has increased the volume of harmonics, which necessitates its accurate and reliable
measurement for both control and metering purposes. Legacy techniques like FFT
underperform with the present grid scenarios, wherein the frequency varies over
time besides the variations of the harmonic combinations. This paper presents an
alternative adaptive technique, Empirical Wavelet Transform (EWT), for individ-
ual frequency component extraction and measurement of nonstationary signals. The
performance of EWT is evaluated through different test cases depicting typical dis-
tribution grid scenarios, where it is tuned to capture the frequency and magnitude
variations of nonstationary signals, and the results are presented. Similarly, dynamic
changes in frequency and magnitudes of the input signals are introduced and the
time–frequency correlations are analyzed and presented.
1 Introduction
Since its inception, the power grid has been growing and evolving with complex
dynamics that are affected by various factors such as harmonics, swell, sag, tran-
sient surges, frequency fluctuations, and notches. Due to these factors, the quality of
voltage and current in the grid has deteriorated with a severe economic impact on
industrial sector [1, 2]. Moreover, with the extensive use of static power electronics-
based converters, the injection of harmonic currents into the grid is predicted to
increase rapidly in the near future. These harmonics unless restricted may even lead
to catastrophic failure of the grid [3]. Power quality problems are caused by regular events and some unpredictable events at both the electric grid end and the consumer end; the power quality problems generated by customer loads account for more than 60% and thus hold the larger share. Almost all customer loads have become nonlinear in recent times, especially after LED and CFL lamps replaced legacy lighting, besides power electronic converters, variable speed drives, uninterruptible power supplies, laser printers, and computing systems [4].
These nonlinear loads overload neutral conductors and transformers, increase sys-
tem losses, etc. Unless these losses are reduced, it may not be possible to have stable
grid operations in the future. Control of harmonics would be possible only with con-
sumer awareness and cooperation. Penalty for harmonics should be imposed in order
to effectively reduce the proliferation of harmonics. Various standards for harmonics, like IEC 1000-3-2 and IEEE 519, are utilized to quantify and contain the harmonic distortions at the point of common coupling of two or more loads. In order
to meet these harmonic standards, consumer installations often utilize harmonic mit-
igation equipment like passive and active filters and reactive power compensation
schemes like Distribution Static Compensators (DSTATCOM), etc. Due to superior
features like gain and frequency adjustment flexibility, lower cost, and size, the active
filters have gained an edge over passive filters. The control algorithms used in active
power filters and reactive power compensators deal with distortion identification
[5–7]. It is a process that extracts the fundamental component of frequency from the
distorted signal. This fundamental component is either used for reference waveform
generation or is separated from the original signal so as to inject the inverse of the
remaining signal into the line to cancel the original load generated harmonics [8,
9]. This yet again necessitates an accurate and reliable harmonic measurement. This
paper proposes a simple yet effective harmonic measurement technique through a
signal processing algorithm named Empirical Wavelet Transforms.
Frequency domain filtering algorithms are generally used for fundamental or any
specific frequency component/s extraction. Fast Fourier Transform (FFT) is widely
used for spectral analysis due to its high-efficiency computational implementation
[10]. Still, these computations assume that the fundamental frequency of the electric
utility is a constant and never drifts from its nominal value. But in the practical power
grid, if the fundamental frequency fluctuates from its nominal value and the FFT
computations in power quality meters do not amend the actual fundamental frequency
variations, then an erroneous measurement will result. It may go to the extent that the
computations would result in some pseudo-harmonics instead of the real harmonics
which further aggravate the deviations. Further, the harmonic in the grid and its
compositions are highly indeterminate and vary from time to time which makes the
due to its tendency to adapt itself to the analyzed signal, thus leading it to segregate the varied modes of the signal more precisely. To realize the EWT operation on a real-time signal x(t), it is first converted into its samples at a chosen sampling frequency fs. Then, by applying the FFT to the discretized signal x(k), the frequency spectrum X(ω) is obtained, followed by finding the set of maxima points in the obtained spectrum. Maxima identification is carried out by means of frequency and magnitude distance thresholds, and their equivalent frequencies ωₙ are deduced. Here, "n" is the number of frequency components projected by applying the FFT to the original signal. Thus, the set of frequencies corresponding to the maxima is obtained, and the entire Fourier domain spectrum [0, π] is segmented into N segments. Each segment is demarcated and defined as Λₙ = [ωₙ₋₁, ωₙ]. Assuming ω₀ = 0 and ω_N = π, the boundaries ωᵢ are obtained so as to represent the center of two consecutive maxima. This results in the Fourier segments [0, ω₁], [ω₁, ω₂], …, [ω_{N−1}, π], as presented in Fig. 1.
A transition phase Tₙ (the changeover from one frequency to the next) of thickness 2τₙ is specified and centered around each ωₙ to accommodate the frequency deviation of the various frequency components. This gives EWT the distinctive capability over FFT to follow the grid frequency variations and account for them in the harmonic calculations. The empirically derived wavelets are a set of bandpass filters on each Λₙ, which can extract any particular frequency pertaining to the defined Λₙ. An empirical scaling function unit and an empirical wavelet function unit are used to build the aforementioned filters. The empirical scaling function and the empirical wavelet function are given by Eqs. (1) and (2), respectively.
$$
\phi_n(\omega) =
\begin{cases}
1 & \text{if } |\omega| \le \omega_n - \tau_n \\
\cos\left[ \dfrac{\pi}{2} \, \beta\!\left( \dfrac{1}{2\tau_n} \left( |\omega| - \omega_n + \tau_n \right) \right) \right] & \text{if } \omega_n - \tau_n \le |\omega| \le \omega_n + \tau_n \\
0 & \text{otherwise}
\end{cases}
\quad (1)
$$
$$
\psi_n(\omega) =
\begin{cases}
1 & \text{if } \omega_n + \tau_n \le |\omega| \le \omega_{n+1} - \tau_{n+1} \\
\cos\left[ \dfrac{\pi}{2} \, \beta\!\left( \dfrac{1}{2\tau_{n+1}} \left( |\omega| - \omega_{n+1} + \tau_{n+1} \right) \right) \right] & \text{if } \omega_{n+1} - \tau_{n+1} \le |\omega| \le \omega_{n+1} + \tau_{n+1} \\
\sin\left[ \dfrac{\pi}{2} \, \beta\!\left( \dfrac{1}{2\tau_n} \left( |\omega| - \omega_n + \tau_n \right) \right) \right] & \text{if } \omega_n - \tau_n \le |\omega| \le \omega_n + \tau_n \\
0 & \text{otherwise}
\end{cases}
\quad (2)
$$
The function β(y) is an arbitrarily formed function; the most commonly used β(x), which equals 0 for x ≤ 0 and 1 for x ≥ 1, is defined as
$$ \beta(x) = x^4 \left( 35 - 84x + 70x^2 - 20x^3 \right) \quad (3) $$
Choosing the half-width of each transition band proportional to its boundary frequency, i.e., τₙ = γωₙ with 0 < γ < 1, the empirical scaling and wavelet functions become
$$
\phi_n(\omega) =
\begin{cases}
1 & \text{if } |\omega| \le (1-\gamma)\,\omega_n \\
\cos\left[ \dfrac{\pi}{2} \, \beta\!\left( \dfrac{1}{2\gamma\omega_n} \left( |\omega| - (1-\gamma)\,\omega_n \right) \right) \right] & \text{if } (1-\gamma)\,\omega_n \le |\omega| \le (1+\gamma)\,\omega_n \\
0 & \text{otherwise}
\end{cases}
\quad (4)
$$
and
$$
\psi_n(\omega) =
\begin{cases}
1 & \text{if } (1+\gamma)\,\omega_n \le |\omega| \le (1-\gamma)\,\omega_{n+1} \\
\cos\left[ \dfrac{\pi}{2} \, \beta\!\left( \dfrac{1}{2\gamma\omega_{n+1}} \left( |\omega| - (1-\gamma)\,\omega_{n+1} \right) \right) \right] & \text{if } (1-\gamma)\,\omega_{n+1} \le |\omega| \le (1+\gamma)\,\omega_{n+1} \\
\sin\left[ \dfrac{\pi}{2} \, \beta\!\left( \dfrac{1}{2\gamma\omega_n} \left( |\omega| - (1-\gamma)\,\omega_n \right) \right) \right] & \text{if } (1-\gamma)\,\omega_n \le |\omega| \le (1+\gamma)\,\omega_n \\
0 & \text{otherwise}
\end{cases}
\quad (5)
$$
The Fourier spectrum is then segmented into different modes in order to retrieve each frequency component independently. Each mode is centered around a specific frequency based on the frequency components of the input signal. The number of modes N is declared by the user, based on a rough idea about the components of the signal to be analyzed. Each mode has a compact support defined by its boundaries. Thus, for N modes, there will be N + 1 boundaries. Each segment has a center frequency around which the boundaries are defined. These boundaries are selected so as to account for deviations in any frequency, i.e., the typical frequency fluctuation in the grid can be considered for this choice. In order to ascertain the
center frequency, the FFT of the given signal is performed and the dominant frequencies existing in the signal are identified. These dominant frequencies are then used as the basis to compute the local maxima in the Fourier spectrum in which they exist. Once the boundaries are formed, the empirical scaling and wavelet functions are built around these boundaries to form the empirical filter bank, as shown in Fig. 2. With these sets of filters, the EWT can now be defined in a way analogous to the regular wavelet transform. The approximation coefficients are obtained by taking the inner product of the empirical scaling function unit with the applied signal X(ω). The inner product with the empirical wavelets yields the detailed coefficients of the applied signal X(ω). Thus, it is evident that this technique can, in fact, extract information about the different frequency components much more accurately from multicomponent nonstationary signals like the typical electrical signals found in grids.
4 EWT Algorithm
An important concept in the EWT algorithm is the "mirroring of the input signal". Mirroring is performed on the given signal before taking its FFT. The entire length of the signal is divided into smaller sections, and these are appended, say, three times on either side of the input signal. The greater the number of sections, the more accurate the filtering will be. Mirroring is done to extend the signal so as to deal with the boundaries. With this, the step-by-step procedure involved in performing EWT is as follows:
Step 1: The real-time signal x(t) is acquired and the number of frequency components to be extracted, i.e., N, is defined.
Step 2: The FFT of the signal, X(ω), is computed.
Step 3: The boundary frequencies in the Fourier spectrum are computed and the frequency axis [0, π] is divided using the calculated boundary frequencies.
Step 4: Empirical wavelet filter banks in the frequency domain are constructed using these boundary frequencies.
Step 5: Wavelet coefficients are obtained by finding the correlation between the empirical wavelet filter banks and the FFT of the input signal.
Step 6: The inverse Fourier transform is taken to get the output of each filter in the time domain.
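The following minimal Python sketch (our illustration, not the authors' implementation) walks through Steps 1–6 in simplified form: it replaces the Meyer-type filters of Eqs. (1)–(5) with idealized rectangular bands between the detected boundaries, and the function name, sampling rate, and test tones are assumptions.

```python
# Simplified EWT-style extraction: spectrum, peak detection, boundaries,
# ideal band filters, inverse FFT per band.
import numpy as np

def ewt_extract(x, fs, n_modes):
    """Split signal x into n_modes band-limited components."""
    n = len(x)
    X = np.fft.rfft(x)                          # Step 2: spectrum of the signal
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mag = np.abs(X)

    # Step 3: pick the n_modes largest spectral peaks as mode centers
    # (a simple stand-in for the maxima/threshold search in the text).
    peak_idx = np.argsort(mag)[-n_modes:]
    centers = np.sort(freqs[peak_idx])
    bounds = np.concatenate(([0.0],
                             (centers[:-1] + centers[1:]) / 2.0,
                             [fs / 2.0]))       # boundaries halfway between centers

    modes = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (freqs >= lo) & (freqs < hi)     # Steps 4-5: one ideal band per segment
        modes.append(np.fft.irfft(X * mask, n)) # Step 6: back to the time domain
    return modes, centers

# Example: 50 Hz fundamental plus 3rd and 5th harmonics
fs = 5000
t = np.arange(0, 0.2, 1 / fs)
sig = 100*np.sin(2*np.pi*50*t) + 20*np.sin(2*np.pi*150*t) + 10*np.sin(2*np.pi*250*t)
modes, centers = ewt_extract(sig, fs, n_modes=3)
print("detected mode centers (Hz):", centers)
```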
Three cases involving static power converters and nonlinear loads are considered for
experimenting with the developed EWT for harmonic analysis. These cases are so
chosen that they are common occurrences in a typical electric distribution system.
The test cases considered are liable to be found on the consumer side of a power grid.
The first test signal consists of a sum of sinusoidal signals of different frequencies
and amplitude. Such a signal can be expected in the inputs of industrial and domes-
tic equipment plugged on to ac mains. Also the frequency content of the resultant
complex sinusoid changes over time. Hence, the test signal SIG1 of Fig. 3 is a non-
stationary signal. The signal contains the following components: (i) 100 V, 50 Hz,
(ii) 60 V, 150 Hz, (iii) 40 V, 250 Hz, (iv) 25 V, 350 Hz, and (v) 20 V, 450 Hz. The
components (i) and (ii) are present for the entire simulation time. The components
(iii), (iv) and (v) are added after 0.5 s. Upon processing the signal using EWT, the different modes of SIG1 along with the time–frequency information of each mode are obtained and presented in Fig. 4. The extracted individual voltages are found
to comply with the inputs applied at every frequency. This shows the capability of
the EWT in extraction with a high degree of accuracy. It is worth noticing here that
the time taken for detecting the dynamic frequency changes and identifying them
rightly is extremely small.
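A sketch of how such a test signal can be reproduced is given below (our own construction; the sampling rate and total simulation time are assumed, as the paper does not state them).

```python
# Illustrative construction of the nonstationary test signal SIG1:
# components (i)-(ii) for the whole run, (iii)-(v) switched in after 0.5 s.
import numpy as np

fs = 10_000                          # assumed sampling frequency
t = np.arange(0, 1.0, 1 / fs)        # assumed 1 s simulation time
on = (t >= 0.5).astype(float)        # step that enables the later components

sig1 = (100 * np.sin(2 * np.pi * 50 * t)
        + 60 * np.sin(2 * np.pi * 150 * t)
        + on * (40 * np.sin(2 * np.pi * 250 * t)
                + 25 * np.sin(2 * np.pi * 350 * t)
                + 20 * np.sin(2 * np.pi * 450 * t)))

# sig1 can now be passed to an EWT routine such as the ewt_extract() sketch above.
```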
Most of the nonlinear loads have their front end as some type of rectifier; thus, an
SCR bridge rectifier feeding a PMDC motor from a three-phase source is considered
as the second case. The source current is polluted with harmonics starting from fifth
to infinity. Initial load torque of the motor is given as 5 N m and after 0.5 s the load
torque is given a step change to 10 N m. The current signal considered for EWT
analysis is presented in Fig. 5, wherein the step change point and the corresponding
rise in current is evident. The result of EWT extraction of Fig. 5 is presented with its
individual frequency components that are shown in Fig. 6. It can be noticed that the
increase in the current at 0.5 s is reflected in the EWT results as well. Thus, it can be
ascertained that EWT accurately captures the change in amplitude of the component
currents upon application of load.
An inverter feeding a linear load is considered as case 3 as this is yet another equip-
ment commonly found in the distribution system. The inverter is fed from a DC
voltage source of 315 V and feeding an RL load. A momentary change in the output
frequency of the inverter is introduced (as shown in Fig. 7) and EWT is performed on
6 Conclusion
Table 1 Comparison between FFT and EWT approaches for harmonic detection

S. no. | Test case | THD from FFT (%) | THD from EWT (%) | Deviation (%)
1      | Case 1    | 79.54            | 90.71            | 12.13
2      | Case 2    | 28.97            | 32.43            | 10.67
3      | Case 3    | 40.07            | 45.12            | 11.20
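For reference, THD values such as those compared in Table 1 can be obtained from the extracted per-frequency components as sketched below (our own helper, not part of the paper's toolchain; it assumes the fundamental is the first component).

```python
# Small sketch: total harmonic distortion from a list of extracted components.
import numpy as np

def thd_percent(components):
    """components[0] is the fundamental; the rest are harmonics (time series)."""
    rms = [np.sqrt(np.mean(c ** 2)) for c in components]
    return 100.0 * np.sqrt(sum(r ** 2 for r in rms[1:])) / rms[0]

# e.g. thd_percent(modes) with `modes` ordered fundamental-first
```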
References
2. Carpinelli G, Iacovone F, Varilone P, Verde (2003) Single phase voltage source converters: analytical modelling for harmonic analysis in continuous and discontinuous current conditions. Int J Power Energy Syst 23(1):37–48
3. Rashid MH, Maswood AI (1988) A novel method of harmonic assessment generated by
three-phase AC-DC converters under unbalanced supply conditions. IEEE Trans Ind Appl
24(4):590–597
4. Kim S, Enjeti PN, Packebush P, Pitel IJ (1994) A new approach to improve power factor and
reduce harmonics in a three phase diode rectifier type utility interface. IEEE Trans Ind Appl
30(6):1557–1564
5. Chen D, Xie S (2004) Review of the control strategies applied to active power filters. In:
Proceedings of the IEEE international conference on electric utility deregulation, restructuring
and power technologies (DRPT’04), pp 666–670, Apr 2004
6. Zhou K, Lv Z, Luo A, Liu L (2010) Control strategy of shunt hybrid active power filter in
distribution network containing distributed power. In: Proceedings of the China international
conference on electricity distribution (CICED’10), pp 1–10, Sept 2010
7. Aredes M, Monteiro LFC, Miguel JM (2003) Control strategies for series and shunt active
filters. In: Proceedings of the IEEE Bologna power tech conference, pp 1–6, June 2003
8. Johnson JR (2002) Proper use of active harmonic filters to benefit pulp and paper mills. IEEE
Trans Ind Appl 38(3):719–725
9. Singh B, Al-Haddad K, Chandra A (1999) A review of active filters for power quality improve-
ment. IEEE Trans Ind Electron 46(5):960–971
10. Granados-Lieberman D, Romero-Troncoso RJ, Osornio-Rios RA, Garcia-Perez A, Cabal-
Yepez E (2011) Techniques and methodologies for power quality analysis and disturbances
classification in power systems: a review. IET Gen Trans Distrib 5(4):519–529
11. Bollen MHJ, Gu IYH (2006) Signal processing of power quality disturbances. Wiley, Hoboken, NJ, USA
12. IEEE Std 1459-2010 (2010) IEEE standard definitions for the measurement of electric power quantities under sinusoidal, nonsinusoidal, balanced, or unbalanced conditions
13. Hamid EY, Mardiana R, Kawasaki ZI (2002) Method for RMS and power measurements based
on the wavelet packet transform. Proc Inst Electron Eng Sci Meas Technol 149(2):60–66
14. Gilles J (2013) Empirical wavelet transform. IEEE Trans Signal Process 61(16):3999–4010
15. Keerthy P, Maya P, Sindhu MR (2017) An adaptive transient tracking harmonic detection
method for power quality improvement. IEEE Reg 10 Symp (TENSYMP)
16. Maya P, Roopasree K, Soman KP (2015) Discrimination of internal fault current and inrush
current in a power transformer using empirical wavelet transform. Proc Technol 21:514–519
Cellular Automata and Arbiter
PUF-Based Security Architecture
for System-on-Chip Designs
Abstract The need for security against adversary access to the functionality of
the chip has become a prerequisite for almost every product-based company with
increasing reports of invasive, semi-invasive, and noninvasive attacks globally. This
paper proposes a novel digital design that provides authentication-based access to the
functionality of the System on Chip (SoC). Cellular Automata (Rule 45) and Arbiter
PUF have been used as key components in order to attain the goal of providing
security for system on chip. The design has been simulated with the help of Xilinx
Vivado 2016.4v suite, and the detailed structure of the design with a focus on the
key components has been illustrated in a lucid manner. Further, the advantages of
the architecture detailed in this paper prove its sheer novelty and ability in resisting
the adversary access to the functionality of SoC.
1 Introduction
Cellular Automata has long been known for its ability to generate random patterns
[1]. Rule 45 of Cellular Automata, in particular, has a higher degree of randomness
in the generated outputs and could be of great use in applications like security,
unpredictability, cryptography, etc., [2]. The innate mathematical and logical process
responsible for the production of the random sequences by Cellular Automata Rule
45 is simple and has been explained in Sect. 3. Physically Unclonable Function
(PUF), as the name suggests, averts the process of cloning an Integrated Circuit (IC)
and represents the idea that “even with utmost control of the manufacturing process,
S. M. Waseem (B)
IF&S, Kadapa, Andhra Pradesh, India
e-mail: waseem.vlsi@gmail.com
A. Fatima
Mosaic, London, ON, Canada
e-mail: afatima.es@gmail.com
one can never produce two exactly identical chips” [3]. PUFs are often related to
the fingerprints of human, which are used to differentiate one human from other and
could never be the same. In this paper, an Arbiter PUF has been considered as part
of the design to provide the security on the chip by generating the responses to the
applied challenges.
“Security” and “Trust” are two words which come into existence in the field of
System-on-Chip (SoC) design, due to the rapid increase in the cases of intruders,
adversaries, and growing technological advances that provide an ease in availability
of sophisticated tools. With increased globalization, the integrated circuits are always
vulnerable to malevolent alterations in design characteristics or even piracy [4].
The Internet of Things (IoT) is expected to grow at a very high pace and form a web of an estimated 20.4 billion devices by 2020, of which a prominent percentage of the connected things are consumer applications, which could probably become an area of concern given the possibility of a higher security breach rate [5, 6]. This paper introduces
a security architecture that allows access to the functionality of the SoC only upon
successful authentication. This gives an advantage of private and authorized access
to the device in the era of IoT and proves to be beneficial, both to the user and to the
manufacturer of the devices.
Section 2 gives the brief background of the prior research work carried in the
field of SoC security, Cellular Automata Rule 45, and Arbiter PUF. Section 3 details
the Rule 45 of Cellular Automata. Arbiter PUF and its structure used in the design
are illustrated in Sect. 4. Section 5 discusses the implementation of the security
architecture with the CA Rule 45 and Arbiter PUF as key components and also
details the advantages it provides for attaining the goal of providing security for
system-on-chip designs. Finally, the conclusion is provided in Sect. 6.
2 Background
In [2], Wolfram mentioned the advantages of Rule 45 CA and illustrated its mathe-
matical model in a detailed manner. Also, the use of CA Rule 45 as random sequence
generator and its application in the field of built-in self-test design have been explic-
itly discussed by Waseem and Fatima in [7–9]. In [3] and [10], the authors discussed
the procedures of PUF implementations and their role in chip security. Further, the use
of PUF as a component in providing security and authentication for digital designs
has been discussed in [11–13].
In [14], the authors described a solution based on a PUF device, wherein a user is issued a device that aids in authentication and cannot be copied or cloned. Further, the work of the authors in [4] gives a clear understanding of the need for security and discusses in detail the challenges being faced by the research community in designing SoCs with security. The two procedures, namely CA Rule 45 and Arbiter PUF, have their own unique features and play key roles in many applications related to security and cryptography for system-on-chip designs; yet, to the best of our knowledge, nothing prominent has been reported in the literature about the use of these two techniques in combination for providing SoC security.
Many mathematical processes have been reported in the literature that produce random output patterns, but what differentiates Rule 45 of Cellular Automata from the others is its ease of implementation along with its diverse and composite behavior [2]. Rule 45 of CA is characterized by the Boolean Eq. (1) and is best represented by the graphical representation shown in Fig. 1 [2, 7].
$$ C_n = C^{p}_{i-1} \;\text{XOR}\; \left( C^{p}_{i} \;\text{OR}\; \text{NOT}\, C^{p}_{i+1} \right) \quad (1) $$
where
C_n is the succeeding-stage output state of the cell under consideration,
C^p_{i−1} is the subsisting state of the left neighbor cell,
C^p_i is the subsisting state of the cell under consideration, and
C^p_{i+1} is the subsisting state of the right neighbor cell.
The simplest behavioral form of random test sequence generation with Rule 45 of CA can be demonstrated as below (here, a black cell represents state "1" and a white cell represents state "0"). Equation (1) and Fig. 1 show that the next-state value ("0" or "1") of a cell under consideration depends upon its subsisting state as well as the subsisting states of its neighbor cells, and this is presented in a numerical representation as shown below:
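A minimal software sketch of this evolution is given below (our own illustration, not the authors' hardware implementation; the 32-cell width, circular boundary, and single-"1" seed are assumptions).

```python
# Rule 45 cellular automaton per Eq. (1): next = left XOR (center OR NOT right).
def rule45_step(cells):
    """One synchronous update of a circular 1-D CA of 0/1 cells."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | (1 - cells[(i + 1) % n]))
            for i in range(n)]

def rule45_sequence(seed, steps):
    """Collect the centre-cell bit after each step as a pseudo-random stream."""
    cells, bits = list(seed), []
    for _ in range(steps):
        cells = rule45_step(cells)
        bits.append(cells[len(cells) // 2])
    return bits

seed = [0] * 32
seed[16] = 1                          # single '1' seed, as in classic CA demonstrations
print(rule45_sequence(seed, 32))      # 32 pseudo-random bits
```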
A PUF is a device that takes an input which is often referred to as “challenge” and
yields an output (response) unique to both, the applied challenge and to the individual
PUF. In another simple statement, “Two PUFs given the same challenge will generate
two different responses”. Due to this, PUFs are considered beneficial in identifying
devices [15].
An Arbiter PUF uses the differing instantaneous time arrivals (due to process
variations) of a propagating signal in similar circuits. The Arbiter PUF structure
considered in this paper is similar to the one in [16] and is shown in Fig. 2. The
Arbiter PUF structure is characterized by a series of switches that use a select bit
to regulate how a signal is propagated through it until it is saved at the end of the
series of switches. As shown in Fig. 2, the select bit for the Arbiter PUF comes from
the challenge being fed to it, which in this paper is the result of Cellular Automata
Rule 45 and Authentication Password bits. Due to this, a higher level of randomness
is created. Further, the addition of two inverters at the output of every switch assures
an additional delay into the signal at every switch and supports the cause of attaining
a good range of PUF features.
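The race-based behavior described above can be illustrated with the following Python sketch (our own behavioral abstraction, not the fabricated design; the Gaussian delay model, class name, and seeds are assumptions, and a full 64-bit response would use 64 arbiter outputs as in Fig. 2).

```python
# Behavioral model of a 64-stage arbiter PUF: each stage adds a small
# process-dependent delay to the top or bottom path depending on the challenge
# bit, and the arbiter outputs 1 if the top path wins the race.
import random

class ArbiterPUF:
    def __init__(self, stages=64, seed=None):
        rng = random.Random(seed)                 # seed models process variation
        # Per-stage delay pairs for (straight, crossed) switch settings.
        self.delays = [((rng.gauss(1.0, 0.05), rng.gauss(1.0, 0.05)),
                        (rng.gauss(1.0, 0.05), rng.gauss(1.0, 0.05)))
                       for _ in range(stages)]

    def response_bit(self, challenge_bits):
        top = bottom = 0.0
        for bit, (straight, crossed) in zip(challenge_bits, self.delays):
            d_top, d_bot = straight if bit == 0 else crossed
            if bit == 0:                          # paths go straight through
                top, bottom = top + d_top, bottom + d_bot
            else:                                 # paths are swapped by the switch
                top, bottom = bottom + d_top, top + d_bot
        return int(top < bottom)                  # arbiter: which edge arrived first

# Two "chips" with different process variation answer the same challenge differently.
chip_a, chip_b = ArbiterPUF(seed=1), ArbiterPUF(seed=2)
challenge = [random.randint(0, 1) for _ in range(64)]
print(chip_a.response_bit(challenge), chip_b.response_bit(challenge))
```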
Fig. 2 Structure of the Arbiter PUF: the challenge bits drive the series of switch stages, and D flip-flop arbiter elements latch response bits 0–63
5.1 Implementation
The security architecture shown in Fig. 3 for accessing functionality of the system
on a chip has been simulated with help of Xilinx Vivado 2016.4v suite. The imple-
mentation with component-level details of the architecture could be best described
in three broad steps as below:
(1) Generating 64-bit length sequence with CA Rule 45 and Authentication Pass-
word: Rule 45-based Cellular Automata is considered for generating a 32-bit
random sequence. The structural along with the logical and mathematical prop-
erties of Rule 45 of CA, which help in the production of the random sequence
have been detailed in Sect. 3 of this paper. Another 32-bit length data sequence
is provided to the input register of the circuit as part of providing a password for
authentication. The 32-bit data emerging from both the sources is driven into
the respective registers CA_DATA and PWD_DATA through MUX to which
control signal is generated with the help of a centrally placed “Control Circuit”,
which also generates the control signal and concatenates the data in both the registers to form an encrypted 64-bit sequence, as shown in Fig. 3.
Fig. 3 Proposed security architecture: the Cellular Automata (Rule 45) random number generator and the 32-bit authentication password (AUTH_PWD) are loaded through a MUX into the CA_DATA and PWD_DATA registers; the control circuit concatenates them into a 64-bit encrypted challenge for the Arbiter PUF, and the PUF response is validated against the challenge–response pairs stored in ROM (logical "1"/"0") to grant or deny access to the functionality of the system on chip
(2) Arbiter PUF processing and validating the Challenge-Response Pair: The
encrypted 64-bit data is driven to the Arbiter PUF circuit through a control
signal generated from the control circuit as shown in Fig. 3. After the reception
of 64-bit response from the Arbiter PUF circuit, Challenge-Response Pair (CRP)
validation is done via stored data in the ROM of the architecture and a logical
“1” or “0” is generated declaring the “match” or “mismatch”, respectively.
(3) Authentication to SoC Functionality: The generation of logical “1” or “0” is
stored in the register (Logical “1”/“0”) as shown in Fig. 3. Depending on the
value from this register (“1”—Access Granted and “0”—Access Denied); the
central control circuit drives the signal which is responsible for accessing the
functionality of the associated system on a chip.
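Putting the three steps together, the following self-contained Python sketch (ours, not the paper's RTL) mimics the authentication flow of Fig. 3; the CA generator and the Arbiter PUF are replaced by clearly labeled software stand-ins, and the challenge–response "ROM" is modelled as a plain dictionary.

```python
# End-to-end sketch of the three-step authentication flow.
import hashlib
import random

def ca_random_bits(n=32, seed=7):
    """Stand-in for the CA Rule 45 random number generator."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n)]

def puf_response(challenge_bits, device_secret=b"chip-A"):
    """Stand-in for the 64-bit Arbiter PUF response (device-unique mapping)."""
    digest = hashlib.sha256(device_secret + bytes(challenge_bits)).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(64)]

# Step 1: 32 CA bits + 32 password bits -> 64-bit encrypted challenge
password = [1, 0] * 16
challenge = ca_random_bits() + password

# Enrolment: store the expected challenge-response pair in the "ROM"
crp_rom = {tuple(challenge): tuple(puf_response(challenge))}

# Steps 2-3: query the PUF and validate against the ROM to grant/deny access
granted = crp_rom.get(tuple(challenge)) == tuple(puf_response(challenge))
print("Access granted" if granted else "Access denied")
```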
The advantages of having the architecture as shown in Fig. 3 as part of SoC are stated
below.
1. With unanticipated attacks from various sources due to growing smarter devices
in IoT arena, “inscribing the process of authentication” for SoC designs would
avoid the access of its functionality to the adversary.
2. The inclusion of Arbiter PUF in the security architecture as a key component
helps in curbing the “Piracy of chip” which is known to be growing on a rapid
scale as part of black/second-hand market.
3. The technique of encryption that involves the Authentication Password and the
random pattern generated by Cellular Automata Rule 45 does not allow the easy
prediction of the challenge bits, thereby creating “anonymity to the attacker” in
guessing an exact sequence at a particular point of processing time.
4. Software or fault-generation attacks, which generally use the communication interface of the device and try to exploit the security vulnerabilities found in cryptographic algorithms, could be a hard task for the attacker, as the 32-bit Authentication Password that forms the input interface gives the attacker the impression that the internal security architecture is 32 bits as well, while it is actually 64 bits, thereby "flawing the attempt".
5. The failed attempt of software attacks may generally push the attacker to opt
for invasive attacks like reverse engineering and micro-probing. Though one
cannot totally prevent this type of attack, they usually come at a high cost and
require a good knowledge of the circuits and their characteristics. These types
of attack usually require a substantial amount of time to study the entire security
architecture and in such instances, the “Arbiter PUF” circuit restricts the attacker
to take control of only a single chip due to the alterations created during the
manufacturing process that cannot be controlled even at a higher vigil.
6 Conclusion
References
11. Kumar R, Burleson W (2012) PHAP: password based hardware authentication using PUFs.
In: 2012 45th annual IEEE/ACM international symposium on microarchitecture workshops,
Vancouver, BC, 2012, pp 24–31
12. Suh GE, Devadas S (2007) Physical unclonable functions for device authentication and secret
key generation. In: 2007 44th ACM/IEEE design automation conference, San Diego, CA, pp
9–14
13. Devadas S, Suh E, Paral S, Sowell R, Ziola T, Khandelwal V (2008) Design and implementation
of PUF-based “unclonable” RFID ICs for anti-counterfeiting and security applications. In: 2008
IEEE international conference on RFID, Las Vegas, NV, pp 58–64
14. Frikken KB, Blanton M, Atallah MJ (2009) Robust authentication using physically unclonable
functions. In: Samarati P, Yung M, Martinelli F, Ardagna CA (eds) Information security. ISC
2009. Lecture notes in computer science, vol 5735. Springer, Berlin, Heidelberg
15. Kerr S, Kirkpatrick MS, Bertino E (2010) Pear: a hardware based protocol authentication
system. In: Proceedings of the 3rd ACM SIGSPATIAL international workshop on security and
privacy in GIS and LBS, ser. SPRINGL’10. ACM New York, NY, USA, pp 18–25
16. Sargent E, Weston J. Authentication using a physically unclonable function. Department of Electrical and Computer Engineering, Utah State University. https://spaces.usu.edu/display/usuece5930hpsec/Home
Wideband Circular Polarized Binomial
Antenna Array for L-Band Radar
Abstract This paper presents a compact binomial rectangular patch antenna array with wide bandwidth and circular polarization for the L-band radar application operating at a frequency of 1.35 GHz. A 16-element planar antenna array is designed with an independent coaxial feed to each element, which helps reduce the negative effects of a complex feed network on antenna performance. The patch elements are truncated diagonally, which is useful for producing circular polarization. There is an air cavity between the ground plane and the substrate which helps improve the gain of the antenna, and the cumulative effect of the truncated patches and the air cavity increases the bandwidth of the antenna. The inter-element spacing in the antenna array is taken as half of the free-space wavelength. The antenna elements are fed with nonuniform amplitudes to reduce the side lobe levels, which are a major problem in radar applications; for this, Pascal triangle coefficients are considered for feeding the antenna elements.
1 Introduction
Wideband planar arrays are been widely used in wireless communication and radar
applications. The major problems with these antennas are the presence of the side
lobes and complex feed networks which degrade the performance of the antenna
N. A. Rao · S. Kanapala
Department of ECE, Vignan’s Foundation for Science, Technology & Research, Vadlamudi,
Guntur, Andhra Pradesh, India
e-mail: anandnelapati@gmail.com
S. Kanapala
e-mail: satishkanapala@gmail.com
M. Sekhar (B)
Acharya Nagarjuna University, Guntur, Andhra Pradesh, India
e-mail: sekhar.snha@gmail.com
[1, 2]. Researchers have dedicated their research to reducing the side lobe levels and the negative effects of the feed network. Antenna gain and axial ratio are affected significantly by the feed network [3]. To overcome these effects, an individual coaxial feed is provided to all the antenna elements. In this paper, a 16-element planar patch antenna array with individual feeding ports for each antenna element is considered to examine the effects of mutual coupling and the reduction of the side lobe levels by applying nonuniform excitations based on the Pascal triangle coefficients [4]. The spacing between the antenna elements is taken to be half of the free-space wavelength. The simulation results show that the side lobe levels are reduced to a significant extent with nonuniform excitations and that the effect of mutual coupling is very low for the proposed design, along with wide bandwidth and circular polarization.
The detailed structure of the proposed single antenna element is shown in Fig. 1. The antenna is designed with a diagonally truncated rectangular patch fed with a coaxial feed of 50 Ω impedance. An air cavity of 8 mm is placed between the substrate and the ground plane, which enables a high gain to be achieved [5]. For fabrication purposes, this air cavity is filled with a foam material. The antenna is designed on an RT Duroid 5880 substrate with a thickness of 62 mils and a relative dielectric constant of 2.2; an aluminum plate of 1 mm thickness is taken as the ground plane. The microstrip patch antenna produces circular polarization, which is achieved with diagonal truncations of the patch. The cumulative effect of the truncated patches and the air cavity increases the bandwidth of the antenna. The optimized dimensions of the antenna element are shown in Fig. 1.
Figure 2 shows the layout of the sixteen-element microstrip patch antenna array with planar geometry. The inter-element spacing between the antenna elements is taken as 0.5 λ (free-space wavelength).
The simulated plot of the reflection coefficient for the proposed antenna is shown in Fig. 3; as can be seen from the plot, the working band of the antenna is 1.21–1.40 GHz.
Figure 4 shows the mutual coupling between the antenna elements, and from the plot it can be observed that the proposed antenna model has a very low mutual coupling effect; this is achieved because of the cumulative effect of the 0.5 λ inter-element spacing between the antenna elements and the use of the diagonally truncated patch elements [6].
Fig. 3 Simulated reflection coefficient S11 of the proposed antenna element (−15.65 dB at 1.35 GHz)
Fig. 4 Simulated mutual coupling (S12, S13, S14) between the antenna elements
In Fig. 5, the two-dimensional (2D) radiation pattern of the diagonally truncated single antenna element is shown. Omnidirectional radiation properties are observed in both the elevation and azimuthal planes.
In Fig. 6, the two-dimensional (2D) radiation pattern of the diagonally truncated
array antenna is shown. It is observed that the majority of the radiation is concentrated
in the boresight direction of the antenna array. A side lobe level of −13 dB is observed for the antenna array, and a nonuniform amplitude technique for the antenna elements to minimize these side lobe levels is presented in a later section.
Fig. 5 Simulated E-plane and H-plane radiation patterns of the single antenna element
Fig. 6 Simulated radiation patterns of the antenna array
3.3 Polarization
In Fig. 7, axial ratio plot of the proposed antenna array is shown. The axial ratio of
the antenna array is observed to be 2.85 dB. A stable axial ratio is observed for the
entire bandwidth for the antenna array.
The simulated gains of the single antenna element and the antenna array are shown in Figs. 8 and 9. A gain of 7.86 dB is achieved for the single antenna element and a gain of 18.39 dB is achieved for the antenna array; the measured 3-dB beamwidth is 66° for the single antenna element and 30° for the antenna array.
From Fig. 9, we can clearly observe that the side lobe levels are very low; the side lobe level is −13 dB, which is quite a good value for antenna array applications. But for phased array and radar applications, further minimization of the side lobe level is required, and here we propose a nonuniform input amplitude technique for further reduction of these side lobe levels [7, 8].
For this, we have considered the Pascal triangle coefficients as the inputs to the antenna array elements [9, 10]. Each row is fed with amplitudes of 1, 2, 2, and 1 for the first, second, third, and fourth elements, respectively, and this is repeated for all four rows. By applying these nonuniform amplitudes to the array elements, the side lobe levels are reduced to a very low level, which can be seen in Fig. 10.
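The effect of the 1, 2, 2, 1 taper can be illustrated with a simple array-factor calculation (our own sketch, independent of the HFSS model; the element pattern is ignored and only one 4-element row with 0.5 λ spacing is considered).

```python
# Broadside array factor of a 4-element row: uniform vs. the paper's 1,2,2,1 taper.
import numpy as np

def array_factor_db(amplitudes, theta, spacing_wl=0.5):
    k_d = 2 * np.pi * spacing_wl                    # phase shift per element
    n = np.arange(len(amplitudes))
    af = np.array([np.sum(amplitudes * np.exp(1j * n * k_d * np.sin(th)))
                   for th in theta])
    af = np.abs(af) / np.max(np.abs(af))            # normalize to the main beam
    return 20 * np.log10(af + 1e-12)

theta = np.radians(np.linspace(-90, 90, 721))
uniform = array_factor_db(np.array([1, 1, 1, 1]), theta)
tapered = array_factor_db(np.array([1, 2, 2, 1]), theta)

# The tapered excitation shows visibly lower first side lobes than the uniform case.
side_region = np.abs(np.degrees(theta)) > 40        # outside the main beam
print("peak side lobe (uniform):", round(float(max(uniform[side_region])), 2), "dB")
print("peak side lobe (tapered):", round(float(max(tapered[side_region])), 2), "dB")
```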
Fig. 7 Simulated axial ratio of the proposed antenna array versus theta (2.85 dB at θ = 0°, 1.35 GHz)
Fig. 8 Simulated gain pattern of the single antenna element (peak gain 7.87 dB at θ = 0°; about 4.85 dB at θ = ±33°)
Fig. 9 Simulated gain pattern of the antenna array with uniform excitation (peak gain 18.40 dB at θ = 0°)
4 Conclusion
Fig. 10 Simulated gain pattern of the antenna array with nonuniform (Pascal triangle) excitation (peak gain 17.77 dB at θ = 0°; side lobes near −18.5 dB at θ = ±28°)
element which produces circular polarization and reduces the mutual coupling in
between the antenna elements. The gap introduced in between the substrate and the
ground plane is helpful for achieving high gain. The realized antenna array has a wide
bandwidth and satisfactory radiation performance along with circular polarization.
The simulation results of the antenna array show that the proposed antenna is good
for phased array and radar applications.
References
1. Shu T (2011) Design considerations for DBF phased array 3D surveillance radar. In: IEEE CIE
international conference on radar, vol 1, pp 360–363
2. Wen-bin G (2010) DBF multi-beam transmitting phased array antenna on LEO satellite. Acta
Electron Sin 38(12):2904–2909
3. Fu S (2009) Broadband circularly polarized slot antenna array fed by asymmetric CPW for
L-band application. IEEE Antennas Wirel Propag Lett 8(2010):1014–1016
4. Doane JP, Sertel K, Volakis JL (2012) A 6.3:1 bandwidth scanning tightly coupled dipole
array with co-designed compact balun. In: Antennas and propagation society international
symposium, Jul 2012
5. Ge L, Luk KM (2015) A three-element linear magneto-electric dipole array with beamwidth reconfiguration. IEEE Antennas Wirel Propag Lett 14:28–31
6. Debogovic T, Bartolić J, Perruisseau-Carrier J (2014) Dual-polarized partially reflective surface antenna with MEMS-based beamwidth reconfiguration. IEEE Trans Antennas Propag 62(1):228–236
7. Lafond O, Caillet M, Fuchs B, Himdi M (2010) Microwave and millimeter wave technologies: modern UWB antennas and equipment. InTech
1 Introduction
K. Nath (B)
Department of Information Technology, North-Eastern Hill University, Shillong 793022, India
e-mail: keshabnath@live.com
S. Roy
Department of Computer Applications, Sikkim University, 6th Mile, Samdur, Tadong, Gangtok,
Sikkim 737102, India
e-mail: sroy01@cus.ac.in
a document (affected due to aging). For the development of a high-accuracy OCR system, segmentation of lines, words, and characters needs to be performed effectively. It is relatively easy to perform segmentation for scripts such as Roman, which have well-shaped and well-spaced characters. However, the presence of the headline (matra), lower/upper modifiers, and compound characters in most Indian scripts makes the task of segmentation more challenging. It is difficult for a classifier to tackle compound characters, as many features need to be considered, such as height, width, shape, space, the touching area, etc. One of the major problems with compound characters is that their shape changes totally after combination, and it is difficult for classifiers to classify them even if the segmentation is done correctly. Most of the prior works are devoted to segmentation of regular characters, and very few works are found on lower/upper modifier segmentation. However, in the case of touching characters, the amount of work done is almost negligible. Some good surveys on character segmentation can be found in [1–4]. These studies are mainly focused on character separation for touching/compound character segmentation.
focused on character separation on touching/compound character segmentation.
To tackle compound characters, researchers use different approaches such as neu-
ral network, histogram, and fuzzy logic. In this paper, we experiment with fuzzy
logic and rough set-based soft clustering to detect segmentation line or area between
two touching characters in few Indian scripts.
We organize our paper as follows. In Sect. 2, we discuss in detail the segmentation of touching characters. Section 3 describes soft clustering techniques for segmentation of touching characters. The performance evaluation of the soft clustering approach is reported in Sect. 4. Finally, in Sect. 5, the paper is summarized with concluding remarks.
Fig. 1 Example of touching characters, where the width Wc of the touching component is greater than the normal character width Ws (Wc > Ws)
Garain and Chaudhuri [5] propose a technique based on fuzzy multifactorial analysis to segment touching characters. In their approach, a fuzzy membership is assigned to all the columns by evaluating each column against some factors on which segmentation depends; an Additive Standard Multifactorial (ASM) function [6] is used for this multifactorial evaluation.
Kahan et al. [7] try to combine several techniques to improve the overall recognition rate. They use an objective function defined as the ratio of the second derivative of the curve of the vertical projection to its height. Their method is improved by introducing a peak-to-valley function [8].
Chaudhury et al. [9] propose a unique technique called junction detection for segmentation of lower modifiers. In this approach, they divide a text line into two equal parts, upper and lower. A scan for pixels is performed starting from the upper right corner of the lower half of the character. If no pixel is found in the bounding box, then the corresponding character does not have any lower modifier. Otherwise, tracing is performed to find the junction pixel, the lower pixel of the bounding box, and the presence of any loop in the lower part. The decision on the existence or non-existence of a lower modifier is made based on pixel presence in the lower part. But according to them, this technique is not effective for segmenting all kinds of lower modifiers, and it gives false segmentation results for characters like (ha). Though they have proposed an idea to handle such characters, practically it is not effective for segmentation.
Next, we discuss different soft versions of the popular k-means clustering and apply them to segregate such touching characters.
Soft clustering allows a data point to be part of more than one set. Hence, for a conjunct character, one may consider the pixel points of one character to be present in one cluster and those of the other character in another cluster. Pixels which are present in between (the touching area) can be considered as members of both clusters. To apply a soft computing approach to touching characters, the character is first scanned and the pixels present in the x–y (height–width) plane are stored. The number of pixels present in the x-direction with respect to a particular value of y is treated as the x coordinate, and the number of pixels present in the y-direction with respect to a particular value of x is treated as the y coordinate. Hence, we can directly apply the soft computing approach to the point data (pixel data in terms of x and y) obtained from the characters. Over the last few decades, a plethora of clustering methods has been proposed that can be used for OCR segmentation. K-means [10] is one of the popular algorithms used for this purpose. Owing to its simplicity of implementation, several soft versions of k-means have been introduced and applied successfully as effective unsupervised methods to problems having overlapping data distributions. In recent research, we evaluated their capability in detecting overlapping clusters and found them very effective [11]. Motivated by our initial results, we decided to apply them to segmenting touching characters. Next, we discuss some of the soft k-means (also called c-means) clustering techniques used to detect the breaking line between two touching characters.
Bezdek [12] introduced the Fuzzy C-Means (FCM) algorithm, which is a fuzzification
of hard k-means. In fuzzy c-means, objects are allowed to participate in multiple
clusters. A membership score for each object is calculated by measuring the distance
between the object and the cluster center of each cluster. A high membership score
of an object with respect to (w.r.t.) a cluster center indicates strong belonging to that
cluster, and a low score indicates a large distance between them. FCM extracts k
clusters from a set of n objects O = {O_1, O_2, …, O_n} by optimizing the following
objective function.
J = \sum_{i=1}^{k} \sum_{j=1}^{n} (\mu_{ij})^{m_1} \, \lVert O_j - x_i \rVert^2 \qquad (1)
where µ_ij ∈ [0, 1] is the membership of object O_j to cluster C_i, m_1 is the fuzzifier,
whose range is from 1 to ∞, and ‖O_j − x_i‖ denotes the distance norm between the object
and the cluster centroid. For a cluster C_i, its centroid x_i is calculated as follows.
x_i = \frac{\sum_{j=1}^{n} (\mu_{ij})^{m_1} O_j}{\sum_{j=1}^{n} (\mu_{ij})^{m_1}} \qquad (2)
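To make Eqs. (1)–(2) concrete, a minimal Python sketch of the FCM iteration is given below. The data layout (pixel coordinates as an (n, 2) array), the random initialisation, and the rule used at the end to flag the touching region (points with nearly equal membership in both clusters) are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def fuzzy_c_means(points, k=2, m=2.0, n_iter=100, eps=1e-6):
    """Fuzzy c-means following Eqs. (1)-(2): points is an (n, d) array of
    pixel coordinates, k the number of clusters, m the fuzzifier."""
    n = points.shape[0]
    # Random initial membership matrix; each object's memberships sum to 1.
    u = np.random.rand(k, n)
    u /= u.sum(axis=0, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        # Eq. (2): centroids as membership-weighted means of the objects.
        centers = (um @ points) / um.sum(axis=1, keepdims=True)
        # Squared distances ||O_j - x_i||^2 appearing in Eq. (1).
        d2 = ((points[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2) + eps
        # Standard membership update that minimises J for fixed centroids.
        inv = d2 ** (-1.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=0, keepdims=True)
        if np.abs(u_new - u).max() < eps:
            u = u_new
            break
        u = u_new
    return centers, u

# Illustrative use on a touching character: pixels whose memberships in the
# two clusters are nearly equal are treated as the touching (breaking) region.
if __name__ == "__main__":
    pts = np.random.rand(200, 2) * [60, 30]      # stand-in for character pixels
    centers, u = fuzzy_c_means(pts, k=2, m=2.0)
    touching = pts[np.abs(u[0] - u[1]) < 0.1]    # assumed overlap threshold
```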
When it is difficult to decide to which cluster a data object should be assigned,
traditional k-means clustering is not suitable. To handle this situation
of uncertainty, Lingras and West [13] extended classical k-means clustering with
the rough set [14] concept. The structure of each cluster C_i is defined by a centroid
x_i, a lower bound \underline{Z}(C_i), and an upper bound \overline{Z}(C_i). In rough k-means, an objective
function is defined based on the elementary concept of upper and lower bounds.
Objects which are close (with respect to a user-defined threshold) to the centroid
of the cluster prototype are assigned to the lower bound. Objects are assigned to the
upper bound of a cluster if there is uncertainty about their belongingness to the
cluster. An object may participate in multiple upper bounds of different clusters if
it is comparably close to the other cluster centroids.
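The assignment rule just described can be sketched as follows; the threshold ratio, the lower/boundary weighting in the centroid update, and the handling of empty sets are assumed illustrative choices in the spirit of Lingras and West [13], not the exact parameterisation used in the experiments.

```python
import numpy as np

def rough_assign(points, centers, threshold=1.2, w_lower=0.7):
    """One rough k-means iteration: returns per-cluster lower-bound and
    upper-bound index sets plus updated centroids (threshold and weights
    are assumed values)."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    k = centers.shape[0]
    lower = [[] for _ in range(k)]
    upper = [[] for _ in range(k)]
    for j, dj in enumerate(d):
        nearest = int(np.argmin(dj))
        # Clusters whose centroid is comparably close to the nearest one.
        close = [i for i in range(k)
                 if i != nearest and dj[i] <= threshold * dj[nearest]]
        if close:                      # uncertain object: upper bounds only
            for i in [nearest] + close:
                upper[i].append(j)
        else:                          # certain object: lower (and upper) bound
            lower[nearest].append(j)
            upper[nearest].append(j)
    new_centers = centers.copy()
    for i in range(k):
        boundary = list(set(upper[i]) - set(lower[i]))
        if lower[i] and boundary:
            new_centers[i] = (w_lower * points[lower[i]].mean(axis=0)
                              + (1 - w_lower) * points[boundary].mean(axis=0))
        elif lower[i]:
            new_centers[i] = points[lower[i]].mean(axis=0)
        elif boundary:
            new_centers[i] = points[boundary].mean(axis=0)
    return lower, upper, new_centers
```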
The rough k-means uses crisp lower and upper bounds. To bring more fuzziness into it,
Mitra et al. [15] introduce a fuzzy boundary (B(C_i) = \overline{Z}(C_i) − \underline{Z}(C_i))
into the concept. They incorporate a fuzzy membership µ_ij, which is similar to the
fuzzy c-means (or fuzzy k-means) membership. An object's proximity to a cluster
is calculated by the membership function. The process of assigning objects to the
upper and lower bounds of a cluster is identical to that of RKM. Objects whose
membership values with respect to two different clusters differ by less than or equal to a
predefined threshold are considered overlapping objects. A rough cluster structure
is assumed to be present within the region between the upper and lower bounds of the
cluster prototype.
Next, we analyze the effectiveness of the soft computing approaches in segmenting
conjunct characters on various datasets.
4 Experimental Results
We apply soft k-means on different conjunct characters from Indian scripts such as
Assamese, Bangla, and Devanagari. We use a good number of conjunct characters
for each script from the Indian script repository (CVPR unit¹). The Devanagari script
contains 154 conjunct characters, while the Bangla and Assamese scripts contain 165
and 133 conjunct characters, respectively. The points shown in green represent
the breaking line of the conjunct characters detected by an algorithm. In the absence of
adequate segmented samples, we rely on visual interpretation to evaluate
the segmentation results. From the results, it is evident that fuzzy c-means, rough
k-means, and rough-fuzzy k-means are effective in detecting the segmentation line.
Some of the results produced by FCM, RKM, and FRKM are reported in Figs. 2, 3,
and 4. The accuracy of detecting the segmentation line by the soft computing approaches
on the various datasets is shown in Table 1. On the Devanagari dataset, FRKM outperforms
FCM and RKM by 1.07% and 1.03%, respectively, and on the Assamese dataset the
differences are about 1.09% and 1.05%, respectively. FCM generates outstanding results
on the Bangla dataset: a performance difference of 1.02% is obtained between FCM and RKM,
and with FRKM the difference is about 1.06%.
¹ https://www.isical.ac.in/~ujjwal/download/database.html
5 Conclusion
For developing a high-accuracy OCR system, preprocessing steps such as the segmentation
of lines, words, and characters need to be performed correctly. We can
achieve a high segmentation rate on regular lines, words, and characters, but
the task becomes more complex in the case of touching characters. We assume each single
character to be a separate cluster and the touching area between them to be an overlapping
region. Since rough set and fuzzy concepts produce effective results in detecting
both disjoint and overlapping objects, in this work we applied soft k-means
clustering approaches for segmenting conjunct characters. In the case of touching characters,
we achieve satisfactory results. However, for segmenting lower/upper modifiers
and characters that are not formed by a side-to-side combination, the soft computing
approaches generate erroneous results. Our future work is to propose an efficient soft
clustering approach that can handle almost all segmentation issues.
References
Abstract A pentaband direct-fed circular patch antenna with ring fractals is proposed
in this paper. The design and analysis of the proposed antenna are carried out using
Ansoft HFSS. To enhance the bandwidth of the patch antenna, fractals are implemented.
The proposed antenna is designed with four iterations on the circular patch, which
covers WLAN and WiMAX applications. The impedance bandwidths achieved are
390 MHz, 330 MHz, 190 MHz, 380 MHz, and 860 MHz at the resonant frequencies of
1.68 GHz, 2.49 GHz, 2.73 GHz, 3.94 GHz, and 5.65 GHz, respectively. The corresponding
peak gains at these resonant frequencies are 2.789 dB, 3.574 dB, 4.658 dB,
3.427 dB, and 4.803 dB, respectively. The designed antenna can be used for S-band and
C-band applications.
1 Introduction
operate in broad and multi-bands that include Bluetooth, WiFi, WiMAX and GPS
services.
To obtain multiband resonances in the antenna design, various approaches are
adopted, for example, slots cut in the patch, an antenna with a shorting pin, and fractal
implementations on the patch. In the literature, Koch snowflake fractal
designs [3, 4], the Apollonian gasket of circles [5], planar inverted-F antennas [6], and
U-slots [7] have been used to achieve multiband resonance of the patch antenna. In [8],
a novel ancient-coin-like fractal antenna with a partial ground is designed to operate
at four bands. In [9], various fractal antennas are discussed; these
types of antennas are mostly used to realize wideband, multiband, low-profile, and
compact designs. In [10], by implementing fractals at the
corners of a polygon-shaped antenna, the bandwidth is enhanced to cover an ultra-wideband
range around 7–10 GHz. In [11], the proposed antenna uses a face-like shape of the first
iteration in the patch to achieve a dual-band response at 2.54 GHz and 6.47 GHz with
return losses of −22.6 dB and −18.9 dB, respectively.
In this paper, a single-layer, edge-fed, circular ring-shaped patch antenna with
multiband capability is presented. This antenna can operate at 1.68, 2.49,
2.73, 3.94, and 5.65 GHz with perfect matching. The return loss, VSWR, and radiation
characteristics are determined. This antenna covers various applications,
including DCS1800 (1,710–1,820 MHz), LTE33-41 (1.9–2.69 GHz), Bluetooth
(2.4–2.4835 GHz), GPS (L1, L4), BDS (B1), GLONASS (L1), GALILEO (E1, E2),
WLAN (802.11b/g/n: 2.4–2.48 GHz), LTE, and WiMAX systems.
2 Antenna Design
The primary step in the antenna design is to select a suitable substrate [12]. The
substrate material properties mainly depend on the dielectric constant and thickness.
A lower dielectric constant material is preferable for the antenna, and the thickness of the
substrate should be large enough to obtain broad bandwidth and increase the efficiency.
If the thickness is too large, however, there is a problem with surface-wave excitation. The
thickness of the substrate should satisfy the following formula:
h \le \frac{0.3\, v}{2 \pi f_r \sqrt{\varepsilon_r}} \qquad (1)

The radius a of the circular patch is calculated as

a = \frac{F}{\left\{ 1 + \frac{2h}{\pi \varepsilon_r F} \left[ \ln\!\left( \frac{\pi F}{2h} \right) + 1.7726 \right] \right\}^{1/2}} \qquad (2)

where

F = \frac{8.791 \times 10^{9}}{f_r \sqrt{\varepsilon_r}} \qquad (3)
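As a quick numerical illustration of Eqs. (2)–(3), the following sketch computes the patch radius for placeholder inputs; the resonant frequency, dielectric constant, and substrate height used in the example are assumptions, not the dimensions reported in Table 1.

```python
import math

def circular_patch_radius(fr_hz, eps_r, h_cm):
    """Radius a (in cm) of a circular microstrip patch from Eqs. (2)-(3);
    fr_hz is the resonant frequency in Hz, h_cm the substrate height in cm."""
    F = 8.791e9 / (fr_hz * math.sqrt(eps_r))                                   # Eq. (3)
    a = F / math.sqrt(1.0 + (2.0 * h_cm / (math.pi * eps_r * F))
                      * (math.log(math.pi * F / (2.0 * h_cm)) + 1.7726))       # Eq. (2)
    return a

# Placeholder example: a 2.49 GHz patch on a 1.6 mm (0.16 cm), eps_r = 4.4 substrate.
print(circular_patch_radius(2.49e9, 4.4, 0.16))   # roughly 1.6 cm
```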
Figure 1 shows the circular patch antenna design with the partial ground. The
parameters are listed in Table 1.
Figure 2 shows the circular ring fractals implemented on the circular patch of
Fig. 1; the corresponding parameters are also listed in Table 1. The outer radius (R7) and
inner radius (R8) of the fourth-order iteration, i.e., the proposed design, are 2.135 mm
and 1.6 mm, respectively.
Fig. 2 Proposed antenna design a zeroth, b first, c second, d third, e fourth iteration
Fig. 3 Comparison of simulated return loss of proposed antenna for the zeroth, first, second, third
and fourth iteration
Fig. 4 Comparison of simulated VSWR of proposed antenna for the zeroth, first, second, third and
fourth iteration
with −34.16 dB and 5.51 GHz with −43.24 dB to the corresponding bandwidths
710 MHz (2.15 GHz–2.86 GHz), 590 MHz (3.66 GHz–4.25 GHz) and 600 MHz
(5.17 GHz–5.77 GHz), respectively. The simulated return loss for the initial design, or
zeroth iteration, is shown in Fig. 3. At these three bands, a perfect matching
condition is observed in the VSWR plot shown in Fig. 4.
Figure 5 shows the simulated return loss of the proposed antenna design with the
fourth-order iteration shown in Fig. 2e. This antenna achieves five bands
at 1.68 GHz with −12.28 dB, 2.49 GHz with −23.63 dB, 2.73 GHz
with −14.03 dB, 3.94 GHz with −23.48 dB, and 5.65 GHz with −25.11 dB,
with the corresponding impedance bandwidths of 290 MHz (1.55 GHz–1.84 GHz),
330 MHz (2.25 GHz–2.58 GHz), 190 MHz (2.66 GHz–2.85 GHz), 380 MHz
(3.72 GHz–4.10 GHz), and 880 MHz (5.08 GHz–5.94 GHz), respectively. Figure 6
shows the VSWR of the proposed antenna design.
The impedance bandwidths and return losses for each order of iteration are
listed in Table 2. From the return loss observations, the reflection coefficients become
smaller as the order of iteration increases, and the resonant frequencies tend to become
stable.
4 Far-Field Results
The simulated far-field characteristics are presented in this section. The 3D gain
polar plots for the resonant frequencies are shown in Fig. 7. The corresponding peak
gains are 2.789 dB, 3.574 dB, 4.658 dB, 3.427 dB, and 4.803 dB at their respective
resonant frequencies.
Figure 8 shows the radiation patterns in the azimuth and elevation planes at the resonant
frequencies. From the plots, it is observed that an omnidirectional pattern is obtained at
1.68, 2.49, and 2.73 GHz, and a bidirectional pattern is achieved at 3.94 and 5.65 GHz.
Fig. 7 3D gain polar plots at a 1.68 GHz, b 2.49 GHz, c 2.73 GHz, d 3.94 GHz, e 5.65 GHz
Fig. 8 Radiation pattern characteristics at a 1.68 GHz, b 2.49 GHz, c 2.73 GHz, d 3.94 GHz, e
5.65 GHz (red-elevation plane, black-azimuth plane)
Fig. 9 Magnitude surface current distribution at a 1.68 GHz, b 2.49 GHz, c 2.73 GHz, d 3.94 GHz,
e 5.65 GHz
6 Conclusion
References
3. Wu CT, Dai WZ, Chiu CN, Hsieh HC (2015) Bandwidth enhancement of microstrip fed Koch
snowflake fractal slot antenna. In: APEMC 2015
4. Sankarnarayan D, Kiran DV, Mukherjee B (2016) Koch snow flake dielectric resonator antenna
located with a circular metallic patch for wideband applications. In: URSI Asia-Pacific radio
science conference, Aug 2016
5. Mukherjee B, Patel P, Mukherjee J (2014) Hemispherical dielectric resonator antenna based
on the apollonian gasket of circles-a fractal approach. IEEE Trans Antennas Propag 62(1)
6. Chen JH, Ban YL, Yuan HM, Wu YJ (2012) Printed coupled-fed PIFA for seven-band
GSM/UMTS/LTE WLAN mobile phone. J Electromagn Waves Appl 26(2–3):390–401
7. Xu R, Li JY, Yang JJ, Wei K, Qi YX (2017) A design of U-shapes slot antenna with broadband
dual circularly polarized radiation. IEEE Trans Antennas Propag
8. Yu Z, Yu J, Ran X, Zhu C (2017) A novel ancient coin-like fractal multiband antenna for
wireless applications. IJAP (2017). Hindawi
9. Srivastava P, Singh OP (2015) A review paper on fractal antenna and their geometries. In:
AEICT 2015
10. Fallahi H, Atlasbaf Z (2015) Bandwidth enhancement of a CPW-fed monopole antenna with
small fractal elements. AEU 69:590–595. Elsevier
11. Sadiq BO (2016) Dual band fractal antenna design for wireless application. Comput Eng Appl
5(3)
12. Garg R, Bhartia P, Bhal IJ, Ittipiboon A (2001) Microstrip antenna design handbook
13. Mayborod DV (2012) Dualband circular disk microstrip antenna. In: IEEE international on
ultra wideband and ultra short impulse signals, pp 161–163, Sep 2012
14. Balanis CA (1992) Antenna theory and design, 3 edn.
High Speed, High-Reliability Edge
Combiner Frequency Multiplier
for Silicon on Chip
Abstract With the advancement of technology, the design of high-speed frequency
multipliers plays a vital role. In this paper, a high-speed, high-reliability
frequency multiplier is proposed. By employing an overlap canceller in the edge
combiner, a high-speed, highly reliable structure is achieved. A delay-locked loop
(DLL) is used to generate a wide range of frequencies; by applying logical effort,
the proposed frequency multiplier minimizes the delay that leads
to deterministic jitter. The proposed high-speed, high-reliability edge combiner
frequency multiplier for silicon on chip is fabricated in a 0.13 µm process technology,
and the output is in the range of 100 MHz–3.3 GHz. The power consumption achieved
is 2.9 µW/MHz.
1 Introduction
R. Pichamuthu
Department of Computer Science and Engineering, Maha Barathi Engineering College,
Chinnasalem, Villupuram, India
e-mail: rajaramnov82@gmail.com
P. Periasamy (B)
Department of Electronics and Communication Engineering, SNS College of Engineering,
Coimbatore, India
e-mail: prakasamp@gmail.com
same as the input clock. A delay-locked loop core is used in DLL-based clock generators
and also in frequency multipliers [5–8]. Two blocks, an edge combiner and
a pulse generator, are used in the DLL-based generator. The pulse generator produces pulses
according to the multiplication ratio control signal and generates the clock using the selected
pulses. The frequency multiplier generates the multiplied clock by collecting the multiphase
clocks, and jitter accumulation does not occur here. The multiplication ratio may vary
according to the frequency multiplier.
2 Related Works
Wang et al. [9] proposed a 1.2 GHz programmable DLL-based frequency multiplier for
wireless applications. A CMOS DLL-based clock generator and an area-efficient
full adder have been proposed and implemented using 180 nm technology [10,
11]. A DLL-based clock generator for low-power applications with anti-harmonic
lock over a wide frequency range has been demonstrated and validated [12, 13].
Due to technology development in the recent past, many researchers have developed
frequency multipliers with different approaches [14–19]. Based on the above literature,
a new frequency multiplier is proposed which combines a
multiplication-ratio-control-logic-based pulse generator with an edge combiner. A
dual-edge-triggered phase detector is used for both edges of CLKREF, DCLK,
and CLKOUT. The delay-locked loop locks within 300 cycles of oscillation,
in which the dual edge detection uses the combination of a 32-phase CLK and a 32-phase
differential CLK. The 32-phase differential clock is used to generate
the pulses. A push-pull stage is used to enhance the maximum multiplied clock frequency.
To improve the speed and reliability, the edge
combiner is combined with the HSHR structure, which consists of a pre-combining stage.
3 Experimental Setup
The proposed high-speed, high-reliability edge combiner (HSHR-EC) frequency multiplier
is shown in Fig. 2. Power consumption plays a major role in the design of
any system. However, because of high ambient temperatures, power dissipation is a
particular concern in automotive environments. Adaptive voltage scaling (AVS) is the
adaptation or modification of the supply voltage for a processor voltage domain given
the process strength. A processor from the strong category of the silicon process has
high dynamic performance (a strong device can operate at a given frequency at a lower
than nominal voltage), but it also has a higher leakage current, which causes higher
power dissipation when operated at the nominal voltage. Because of process variations,
the supply voltage of each device can be adjusted to minimize power while still achieving
the desired performance. The minimum supply voltage is then used in the jitter calculation.
Minimum supply voltages of weak silicon and strong silicon can result in supply
current differences that range from several tens to hundreds of milliamperes, resulting
in substantially better SoC thermal performance for each IC. The PMIC can then
read the minimum supply voltage of each device and adjust the supply voltage level for
optimal performance.
The proposed HSHR-EC frequency multiplier is simulated using the Xilinx ISE
Foundation series 14.2 tool. The simulated output of the proposed HSHR-EC frequency
multiplier is illustrated in Fig. 3.
The proposed HSHR-EC multiplier output is approximated to 2n bits by suitable
shift operations; hence, the bit error at the approximated output is not very significant.
This 2n-bit output from the output mux is then added to the previous 2n-bit outputs.
In the pre-combining stage, the two signals are combined into one signal and the ratio is
halved. The channel widths of the output buffer and PU-P have been normalized with
respect to the channel widths of the PD-N, Fmul,max, and ERR_DUTY. Due to the
pre-combining processes in the proposed HSHR-EC frequency multiplier, two different
signals are merged into a single signal. Also, it has been found that the proposed
HSHR-EC frequency multiplier has the best performance as compared with other
reported frequency multipliers.
The simulated result of the jitter calculation for the proposed HSHR-EC multiplier
is illustrated in Fig. 4. For the 100 MHz reference signal, the operating frequency
of the proposed HSHR-EC multiplier has been measured at 3.30 GHz. The values of
J_rms and J_pp, the rms and peak-to-peak jitter times respectively, are found from
Fig. 4 as 1.95 ps and 14.6 ps.
5 Conclusion
In this paper, a high-speed, high-reliability (HSHR) edge combiner frequency multiplier
for silicon on chip has been proposed, and the frequency multiplier has been fabricated
in 0.13 µm CMOS technology. The proposed design achieves high-speed operation using
an edge combiner structure with an overlap canceller. Hence, the delay variation
between negative and positive edge generation has been reduced.
References
1. Burd TD, Pering TA, Stratakos AJ, Brodersen RW (2000) A dynamic voltage scaled micro-
processor system. IEEE J Solid-State Circuits 35(11):1571–1580
2. Cao Z, Foo B, He L, van der Schaar M (2010) Optimality and improvement of dynamic
voltage scaling algorithms for multimedia applications. IEEE Trans Circuits Syst I Reg Papers
57(3):681–690
3. Elgebaly M, Sachdev M (2007) Variation-aware adaptive voltage scaling system. IEEE Trans
Very Large Scale Integr (VLSI) Syst 15(5):560–571
4. Kim C, Hwang IC, Kang SM (2002) A low-power small-area, 7.28-ps-jitter 1-GHz DLL-based
clock generator. IEEE J Solid-State Circuits 37(11):1414–1420
5. Chien G, Gray PR (2000) A 900-MHz local oscillator using a DLL-based frequency multiplier
technique for PCS applications. IEEE J Solid-State Circuits 35(12):1996–1999
6. Lee TC, Hsiao KJ (2006) The design and analysis of a DLL-based frequency synthesizer for
UWB application. IEEE J Solid-State Circuits 41(6):1245–1252
7. Chuang CN, Liu SI (2007) A 40 GHz DLL-based clock generator in 90 nm MOS Technology.
In: IEEE International Solid-State Circuit Conferences Digest of Technical Papers. pp 178–595
8. Maulik PC, Mercer DA (2007) A DLL-based programmable clock multiplier in 0.18-µm
CMOS with −70 dBc reference spur. IEEE J. Solid-State Circuits 42(8):1642–1648
9. Wang C, Tseng YL, She HC, Hu R (2004) A 1.2 GHz programmable DLL-based fre-
quency multiplier for wireless applications. IEEE Trans Very Large Scale Integr (VLSI) Syst
12(12):1377–1381
10. Kim JH, Kwak YH, Kim M, Kim SW, Kim C (2006) A 120-MHz-1.8-GHz CMOS DLL-based
clock generator for dynamic frequency scaling. IEEE J Solid-State Circuits 41(9):2077–2082
11. Kalaiyarasi M, Prakasam P (2015) Proposed low power and area efficient full adder using
CMOS 180 nm technology. Int J Appl Eng Res 10(9):7103–7108
12. Koo J, Ok S, Kim C (2009) A low-power programmable DLL-based clock generator with
wide-range anti harmonic lock. IEEE Trans Circuits Syst II Exp Briefs 56(1):21–25
Keywords Pan–Tompkins algorithm · Pattern net · Fit net · Cascaded net ·
Feedforward net · ECG classification
1 Introduction
Electrocardiography (ECG) is a technique used to record the electrical activity of
the heart and observe the heart variation and abnormalities over a period of time
using electrodes placed on the skin. ECG signal can be divided into phases of
depolarization and repolarization of the muscle fibers which make up the heart
[1]. These phases consist of P-waves, QRS-complexes, and T-waves which provide
fundamental information about the electrical activities of the heart [2]. ECG signal
processing can be used to detect conditions like Arrhythmia, Supraventricular
Arrhythmia, Sleep Apnea, Normal Sinus Rhythm, and Long-Term Atrial Fibrillation [3]. Sleep
apnea is a sleep disorder characterized by cessations of breathing during sleep. There
are three types of sleep apnea: Central Sleep Apnea (CSA), Obstructive Sleep Apnea
(OSA), and mixed. An arrhythmia occurs due to factors like coronary artery
disease, electrolyte imbalances in the blood, changes in the heart muscle, etc. Supraventricular
Arrhythmia is a type of arrhythmia causing an abnormally fast heart rhythm due to
unsuitable electrical activity in the heart. It begins in the areas above the heart’s lower
chambers, such as the upper chambers (the atria) or the atrial conduction pathways.
This disorder can result from rheumatic heart disease or an overactive thyroid gland.
Long-Term Atrial fibrillation (AF) involves the occurrence of an irregular heartbeat
where the atria fail to contract in a strong manner. The clinical risk factors for AF
include advancing age, diabetes, hypertension, congestive heart failure, rheumatic
and non-rheumatic valve disease, and myocardial infarction. The echocardiographic
risk factors for non-rheumatic AF include left atrial enlargement, increased left
ventricular wall thickness, and reduced left ventricular fractional shortening. ECG
signals available from the PhysioNet library provide a standard dataset for performing
all tests. ECG signal processing is used to convert the raw data into a form which
can be used for feature extraction (Fig. 1).
The Discrete Wavelet Transform provides a method for feature extraction in which
the choice of wavelet depends upon the application and the user. Wavelet
families include Biorthogonal, Coiflet, Haar, Symlet, and Daubechies [5] wavelets [6,
7]. Wavelet techniques are the most commonly used but are complex and time-consuming.
Hence, other techniques like the Pan–Tompkins algorithm can be used for
preprocessing and feature extraction, as it provides a higher level of decomposition
and is comparatively less time-consuming [8, 9]. Fast Fourier transform and other
techniques are used for preprocessing of the signal in order to remove noise and
baseline wandering [10]. Several classification techniques can be used for ECG
classification like Support Vector Machines (SVM), decision tree, neural network,
nearest neighbors, etc. [11]. Linear discriminant analysis is a linear classifier that
minimizes the intraclass variance and maximizes the separation between the mean values
of the two classes to find a discriminant line in a lower-dimensional feature space [12].
It does not take into account the difference between adjacent sample points. Support Vector
Machines (SVM), on the other hand, use adjacent sample points to draw a discriminatory
line which is used for classification [12]. SVM is considered to give higher accuracies
and hence is preferable. Artificial Neural Network (ANN) classifiers can be fed with
various parameters, some of which are spectral entropy, Poincaré plot geometry, and
the largest Lyapunov exponent (LPE) [1] (Fig. 2).
Fig. 2 Block diagram for the Pan–Tompkins algorithm to derive features used as ANN inputs
2 Feature Extraction
The raw ECG signal is processed to filter out the noise and extract the RR intervals
using the Pan–Tompkins algorithm [9], which are further used to extract fifteen features
from each signal. The extracted features are fed into four different neural networks for
training and are then validated using various test signals. The accuracy is calculated
for each neural network and each disease. The same process is performed for the
classification technique proposed below as well, and the results are compared. The ECG
signal includes noise which needs to be removed before it is
processed for feature extraction. The Pan–Tompkins algorithm is a real-time algorithm
for the detection of the QRS complex in ECG signals, developed by Pan and Tompkins
[4]. It reliably recognizes QRS complexes on the basis of digital analysis of slope,
amplitude, and width. In this algorithm, a special digital bandpass filter reduces false
detections which can be caused by various types of interference present in ECG
signals. This filtering allows the use of low thresholds and hence helps in increasing
the detection sensitivity. Stepwise signal processing of the raw signals of each disease
is graphically depicted in Figs. 3, 4, and 5.
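For readers who wish to reproduce the preprocessing, a minimal sketch of the classic Pan–Tompkins chain (band-pass filtering, differentiation, squaring, moving-window integration, and peak picking) is given below; the sampling rate, filter band, window length, and threshold rule are assumed values and not necessarily the settings used to produce Figs. 3, 4, and 5.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def pan_tompkins_r_peaks(ecg, fs=360):
    """Approximate R-peak detection in the spirit of Pan and Tompkins [4].
    fs is the sampling frequency in Hz (360 Hz assumed, as in MIT-BIH)."""
    # 1. Band-pass filter (roughly 5-15 Hz) to suppress baseline wander and noise.
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # 2. Differentiate to emphasise the steep QRS slopes.
    diff = np.diff(filtered)
    # 3. Square to make all values positive and accentuate large slopes.
    squared = diff ** 2
    # 4. Moving-window integration (~150 ms window).
    win = int(0.15 * fs)
    integrated = np.convolve(squared, np.ones(win) / win, mode="same")
    # 5. Simple threshold and refractory period (assumed rule).
    peaks, _ = find_peaks(integrated,
                          height=0.4 * integrated.max(),
                          distance=int(0.25 * fs))
    return peaks  # sample indices of detected R peaks
```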
Feature extraction was done using Heart Rate Variability (HRV). HRV can be
defined as the interval between successive R peaks.
Fig. 5 ANN structure with 5 input neurons, 7 neurons in 1 hidden layer and 2 output classes
The RR interval is computed as RR(i) = r(i + 1) − r(i), where r(i) is the R-peak time of
the ith wave [3]. The extracted RR intervals from each data segment were used to extract
a total of 15 features [3].
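Building on the definition above, the short sketch below derives the RR series from detected R-peak indices and computes three illustrative HRV statistics; the paper's full set of fifteen features is not enumerated in the text, so mean RR, SDNN, and RMSSD are shown only as representative stand-ins.

```python
import numpy as np

def rr_intervals(r_peak_samples, fs=360):
    """RR(i) = r(i+1) - r(i), converted from sample indices to seconds."""
    r_times = np.asarray(r_peak_samples) / fs
    return np.diff(r_times)

def hrv_features(rr):
    """Three illustrative HRV statistics computed from the RR series."""
    return {
        "mean_rr": float(np.mean(rr)),                       # average RR interval
        "sdnn": float(np.std(rr, ddof=1)),                   # overall RR variability
        "rmssd": float(np.sqrt(np.mean(np.diff(rr) ** 2))),  # short-term variability
    }
```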
4 Proposed Method
The neural networks are trained with 70% of the total samples in each case, and each sample
consists of 15 features derived from the RR intervals. In our proposed method,
we trained 10 networks named Net1–Net10 (Table 1). Each Net is trained on the
features of two diseases, and the accuracy of each Net has been reported in Table 2
(excluding the normal case). Since four diseases and one normal case have been taken,
the total number of combinations formed by taking two classes at a time is ten,
and each class repeats four times in this process (refer to Table 1). Using the ten
Nets, a condition table (Table 1) is formed which clearly explains the parameters
that need to be considered during post-classification from the NN outputs.
Table 2 Classification performance in terms of accuracy (%) with two classes at a time; * denotes the test file used

Signal classes | Cascade Net | Feedforward Net | Fit Net | Pattern Net
Arrhythmia*, Long-Term AF | 66.67 | 93.33 | 93.33 | 80
Arrhythmia, Long-Term AF* | 96 | 100 | 100 | 96
Long-Term AF*, Sleep Apnea | 100 | 100 | 100 | 92
Long-Term AF, Sleep Apnea* | 62.5 | 70.83 | 70.83 | 75
Long-Term AF, Supraventricular Arrhythmia | 100 | 100 | 100 | 100
Long-Term AF, Supraventricular Arrhythmia* | 100 | 100 | 100 | 100
Sleep Apnea, Supraventricular Arrhythmia | 45.83 | 70.83 | 45.83 | 62.5
Sleep Apnea, Supraventricular Arrhythmia* | 93.61 | 100 | 97.87 | 100
Arrhythmia*, Sleep Apnea | 66.67 | 73.33 | 46.66 | 13.33
Arrhythmia, Sleep Apnea* | 100 | 79.16 | 79.16 | 83.33
Arrhythmia*, Supraventricular Arrhythmia | <1 | 20 | <1 | 20
Arrhythmia, Supraventricular Arrhythmia* | 100 | 95.74 | 93.62 | 95.74
X_{\text{train}} =
\begin{bmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,15} \\
x_{2,1} & x_{2,2} & \cdots & x_{2,15} \\
\vdots  & \vdots  & \ddots & \vdots   \\
x_{n,1} & x_{n,2} & \cdots & x_{n,15}
\end{bmatrix},
\qquad
X_{\text{testing}} =
\begin{bmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,15} \\
x_{2,1} & x_{2,2} & \cdots & x_{2,15} \\
\vdots  & \vdots  & \ddots & \vdots   \\
x_{m,1} & x_{m,2} & \cdots & x_{m,15}
\end{bmatrix} \qquad (2)
Each ECG signal is used to extract 15 features, represented by {x1, x2, x3, …, x15}, which
form the matrices given above, with "n" as the number of samples used for
training and "m" as the number of samples used for validation. From each classifier,
we get two outputs which describe to which of the two classes the sample belongs.
If the probability of the first (A) output neuron is high, the sample
is considered to belong to class 1 (the winning class); otherwise, it belongs to class 2.
In Table 1, each Net has two outputs: the output belongs either to
class A or to class B. The last five columns A–E of Table 1 are the four diseases and
the one normal ECG case. The second column depicts the cases chosen for
testing, and the first column the trained network (Net1–Net10).
The numbers "0", "1", and "2" are used in the table to denote the winning class as
explained above. For example, in the case of Net1, if the two diseases sampled from
A and B are used to train the network, then "1" denotes that the sample belongs to
disease A, "2" denotes that it belongs to disease B, and "0" implies a non-applicable
disease. In our proposed method, we trained the 10 combinations of the five
classes with 10 dedicated networks, and their accuracy is presented in Table 2. Using
these trained networks, the following process is carried out: the condition
table is used to find the values of the flag variables FA to FE.
For coding convenience, we use "1" for NetX and "2" for NetX′. Either NetX or
NetX′ will be high according to the condition table, which is used to determine the
values of the flag variables FA, FB, FC, FD, and FE, each ranging from 0 to 4.
The flag variables are defined as
FA = Sum(Net1, Net2, Net3, Net4); FB = Sum(Net1′, Net5, Net6, Net7); FC = Sum(Net2′, Net5′, Net8, Net9);
FD = Sum(Net3′, Net6′, Net8′, Net10); FE = Sum(Net4′, Net7′, Net9′, Net10′).
Finally, the maximum of FA, FB, FC, FD, and FE is taken as the output.
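The post-classification step can be summarised by the small sketch below: each pairwise network votes for one of its two classes, the votes are accumulated into FA–FE exactly as defined above, and the class with the maximum flag value is reported. The mapping of Net1–Net10 to class pairs is the one implied by the flag definitions; the classifier outputs themselves are assumed to be supplied by the trained networks.

```python
# Classes: A-D are the four diseases, E is the normal ECG case.
CLASSES = ["A", "B", "C", "D", "E"]

# Net1..Net10 cover the ten unordered pairs of the five classes, as implied
# by the flag definitions FA..FE above.
NET_PAIRS = {
    "Net1": ("A", "B"), "Net2": ("A", "C"), "Net3": ("A", "D"), "Net4": ("A", "E"),
    "Net5": ("B", "C"), "Net6": ("B", "D"), "Net7": ("B", "E"),
    "Net8": ("C", "D"), "Net9": ("C", "E"), "Net10": ("D", "E"),
}

def post_classify(net_outputs):
    """net_outputs maps each Net name to 1 (first class wins) or 2 (second
    class wins). The flags count how often each class wins and the class
    with the maximum flag value is returned."""
    flags = {c: 0 for c in CLASSES}
    for net, winner in net_outputs.items():
        first, second = NET_PAIRS[net]
        flags[first if winner == 1 else second] += 1
    return max(flags, key=flags.get), flags

# Example: for a sample of class B, Nets 1, 5, 6 and 7 should vote for B,
# giving FB = 4 and hence the decision "B".
decision, flags = post_classify({
    "Net1": 2, "Net2": 2, "Net3": 2, "Net4": 2,
    "Net5": 1, "Net6": 1, "Net7": 1,
    "Net8": 2, "Net9": 2, "Net10": 2,
})
```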
5 Results
For signal processing (feature extraction), the database of all the required samples was
collected from physionet.org. The database of the following diseases was collected
from the MIT-BIH database [2, 19]:
• Sleep Apnea
• Normal Sinus Rhythm
• Long-term Atrial fibrillation
• Arrhythmia
• Supraventricular Arrhythmia.
Each file was downloaded as a MATLAB file and used for further processing. 70% of
the samples were taken for training and the remaining 30% were used for testing. Taking
two diseases at a time, the ANN was trained and tested with test files of both classes, and its
accuracy was calculated (Table 2). * denotes that the particular case's test signal was used while
testing. For more clarity, accuracy of each trained network has been presented in two
rows with four columns which include Cascade Net, Feedforward Net, Fit Net and
Pattern Net. As observed from Table 2, maximum accuracy is obtained on training
the network with Long-Term AF and Supraventricular Arrhythmia and least accuracy
is obtained with the classes Arrhythmia and Supraventricular Arrhythmia. Amongst all
networks, the plain Feedforward network, which uses the scaled conjugate gradient (SCG)
algorithm, yields the best result. Amongst Arrhythmia and Long-Term AF, Feedforward and
Fit Net produce the best result. Similarly, amongst Long-Term AF and Sleep Apnea, Feedforward
and Fit Net produce the best result. In the case of Sleep Apnea and Supraventricular
Arrhythmia, Feedforward and Pattern Net produce the best result, and in the case of
Arrhythmia and Sleep Apnea, Cascade and Feedforward produce average results.
Since the samples from Arrhythmia and Supraventricular Arrhythmia are similar
because of the similarity of the diseases, accuracy can be improved by providing more
samples for training. A DNN could be used to extract features more accurately. In terms of
networks, for the Cascade Net, Long-Term AF and Supraventricular Arrhythmia produce
100% accuracy while Arrhythmia and Supraventricular Arrhythmia produce less than 51% accuracy.
Similarly, for the other three networks, Long-Term AF and Supraventricular Arrhythmia
produce 100% accuracy.
To compare the results obtained through our method, all four networks were also
trained on samples of all five classes together, and the results of this multiclass
network are shown in Table 3. The table also shows the results obtained from the
proposed method using the dual (binary) classifiers. Results are obtained for
all four diseases and the normal case.
From Table 3, it can be inferred that in most of the cases the proposed method
produces better results, particularly when classifying Long-Term AF: with the Cascade Net,
the accuracy increased to more than 87%. In the case of the normal samples, the accuracy
of three of the four networks increased by 9.09%. Similarly, the accuracy of the proposed
method in classifying Arrhythmia with the Cascade Net increased by 27.579%. With the
Feedforward network, the accuracy of Arrhythmia classification increased by approximately
63%, while with Fit Net, the Arrhythmia classification accuracy increased by 54.54%. With
Pattern Net, the Arrhythmia classification accuracy increased by 50%.
6 Conclusion
The network was trained with different numbers of neurons and gave the best result with
seven neurons in one hidden layer. Feedforward Net and Fit Net performed best while
classifying between Arrhythmia and Long-Term AF, and between Long-Term AF and Sleep
Apnea. All four networks performed equally well while classifying between
Long-Term AF and Supraventricular Arrhythmia. Feedforward Net performed best
while classifying between Sleep Apnea and Supraventricular Arrhythmia. Cascade
Net provides the best performance when classifying between Arrhythmia and Sleep
Apnea. Table 3 compares the performance of the normal multiclass network
and the proposed post-classification on binary classification, respectively. The proposed
method performs better than the multiclass network in all the cases
of Arrhythmia, normal sinus rhythm, and Long-Term AF. Moreover, it produces superior
results than the multiclass network when used with the Cascade and Feedforward
Nets in the case of Sleep Apnea. It also performs better in the case of Supraventricular
Arrhythmia when used with Pattern Net. Hence, it can be concluded that the proposed
method provides a comparatively efficient way to classify ECG signals among the five
classes taken. It can also be concluded that the Feedforward Net provides the best solution
in most cases when comparing the above diseases taken two at a time.
References
1. El-Khafif SH, El-Brawany MA (2013) Artificial neural network-based automated ECG signal
classifier. ISRN Biomed Eng
2. Hamiane M, Ali MH (2017) Wavelet-based ECG signal analysis and classification, World
Academy of Science, Engineering and Technology (WASET). Int Sci Index, Comput Inf Eng
11(7). https://www.waset.org/publications/10008031
3. da Silva Pinho AM, Pombo N, Garcia NM (2016) Sleep apnea detection using a feed-forward
neural network on ECG signal. In: 2016 IEEE 18th International Conference on e-Health
Networking, Applications and Services (Healthcom). Munich, pp 1–6
4. Pan J, Tompkins WJ (1985) A real-time QRS detection algorithm. IEEE Trans Biomed Eng
BME-32(3):230–236
5. https://www.researchgate.net/publication/305886549/figure/fig17/AS:392760478715909@1
470652803522/Fig-6-Features-of-ECG-signal.png
6. Gupta KO, Chatur PN (2012) ECG signal analysis and classification using data mining and
artificial neural networks. Int J Emerg Technol Adv Eng 2(1)
7. https://dmm613.files.wordpress.com/2014/12/single_hidden_layer_ann.jpg
8. Mannurmath JC, Raveendra M (2014) MATLAB based ECG signal classification. Int J Sci
Eng Technol Res (IJSETR) 3(7)
9. Jambukia SH, Dabhi VK, Arshadkumar H, Prajapati B (2015) Classification of ECG signals
using machine learning techniques: a survey. In: 2015 International conference on advances
in computer engineering and applications (ICACEA) IMS Engineering College, Ghaziabad,
India ©2015 IEEE
10. Sao P, Hegadi R, Karmakar S (2015) ECG signal analysis using artificial neural network. Int
J Sci Res (IJSR)
11. Raj AAS, Dheetsith N, Nair SS, Ghosh D Auto analysis of ECG signals using artificial neu-
ral network. In: International conference on science, engineering and management research
(ICSEMR 2014)©2014. IEEE
12. Roza VCC, de Almeida AM, Postolache OA (2017) Design of an artificial neural network
and feature extraction to identify arrhythmias from ECG. In: 2017 IEEE Symposium Medical
Measurements and Applications (MeMeA)
13. Highlighting the current issues with pride suggestions for improving the performance of
real-time cardiac health monitoring. In: Information Technology in Bio- and Medical Informatics,
ITBAM, 2010, Spain
14. Memić J (2017) ECG signal classification using artificial neural networks: comparison of
different feature types. In: IFMBE, vol 62
15. MIT Database [Online] Available: http://www.physionet.org/physiobank/database/mitdb
16. Sjöberg J, Viberg M (1997) Separable non-linear least-squares minimization—possible
improvements for neural net fitting. In: IEEE workshop in neural networks for signal pro-
cessing. Amelia Island Plantation, Florida, 24–26 Sept 1997, pp. 345–354
17. https://in.mathworks.com/help/nnet/ref/fitnet.html
18. Kim T (2010) Pattern recognition using artificial neural network: a review. In: Bandyopad-
hyay SK, Adi W, Kim T, Xiao Y (eds) Information Security and Assurance. ISA 2010. Com-
munications in Computer and Information Science, vol 76, Springer, Berlin, Heidelberg. The
Cascade-Correlation learning architecture Scott E. Fahlman Carnegie Mellon University Chris-
tian Lebiere
19. Lin CT, Juang CF (2001) An adaptive neural fuzzy filter and its applications. IEEE Trans Syst
Man Cybern 27(4):1103–1110
Automatic Phone Slip Detection System
Abstract Mobile phones are becoming increasingly advanced, and the latest ones
are equipped with many diverse and powerful sensors. These sensors can be used to
study different positions and orientations of the phone, which can help smartphone
manufacturers track the handling of their customers' phones from the recorded log.
The inbuilt sensors such as the accelerometer and gyroscope present in our phones are
used to obtain acceleration and orientation data of the phone along the three axes for
different vulnerable phone positions. From the data obtained, appropriate features are
extracted using various feature extraction techniques and given to classifiers
such as neural networks, which decide whether the phone is in a position vulnerable to
falling or in a safe position.
1 Introduction
Human activity recognition using devices like cameras or microphones has
become an active field of research. It has the potential to be applied in different
applications such as ambient-assisted living, so human activity recognition systems
have become a part of our daily lives. Smartphones incorporate many diverse and
powerful sensors which can be used for human activity recognition, such as GPS
sensors, audio sensors (microphones), vision sensors (cameras), temperature sensors,
acceleration sensors (accelerometers), light sensors, and direction sensors (magnetic
compasses). The data from these sensors can be transferred using wireless communication
such as Wi-Fi, 4G, and Bluetooth [1]. Accelerometers and gyroscopes have the most
applications as they are the most accurate ones [2].
Studies have shown that activity recognition using mobile phones is one of the
most extensively studied topics in this research domain. Motion-based and location-based
activity recognition using inbuilt sensors and wireless transceivers are the dominating
types of activity recognition on mobile phones [3]. Among motion-based activity
recognition systems, three-axis accelerometers are the most used sensors available
on phones. Most of the studies focus on detecting locomotion activities such as
standing and walking [4]. Studies on various phone positions and orientations, and on how
these positions change different parameters of the inbuilt gyroscope and accelerometer,
have been limited [5, 6]. When a phone is kept in a particular orientation or position,
there are many parameters associated with it. The location of the phone decides a lot
about its future [7].
Good results have been obtained using the Ameva discretization algorithm and a
new Ameva-based classification system for physical activity recognition on
smartphones [8]. On comparing the accuracy of human activity recognition, it was
found that using only a basic accelerometer gave an accuracy of 77.34%; however,
this increased to 85% when the basic features were combined with angular features
calculated from the orientation of the phone [9]. Human activity recognition using
an accelerometer was done for some common positions with an accuracy of around 91%
[10]. Hence, an accelerometer alone cannot give very accurate results, but it still
significantly increases the efficiency of activity recognition. In contradiction, [11] suggests
that this technique still needs a lot of research before it can be used by the general masses.
Using a few preprocessing techniques, efficiency can be increased too, but with many
limitations [12].
In this paper, we have focused on different phone positions which are considered
to be risky and harmful. Depending on these risky positions, various parameters such as the
roll, pitch, and azimuth change. Whether the phone is at a slipping point, kept on a
table, or kept on a book, all these factors decide whether the phone will be safe after
a certain movement or jerk is applied. If the jerk moves the phone by a certain
distance, the phone may remain safe, or there may be a wide change in its orientation
which results in a fall. To know all these things beforehand,
we have come up with an idea which will tell the user, from the change in orientation
of the phone, whether the phone is safe or in a risky position. The most basic
sensors that can be used for these cases are the accelerometer and gyroscope. Here
we selected six cases: normal touch, accidental keep, complete slip,
slip till tipping point, flip, and fall. For all these six positions, 20 samples each are
taken, and the acceleration and orientation values for each sample are stored.
The data obtained were plotted, then filtered, and the appropriate features
were extracted [13]. Based on the extracted features, classification algorithms can be
implemented using machine learning so that the system can automatically classify
different positions. From [14], we learned that people consider many aspects
before placing a phone somewhere; based on those aspects, we selected the
various positions of the phone.
In Sect. 2, the methodology of data collection for various samples of the
different phone slipcases and the procedure to generate the required database are
discussed. In Sect. 3, the procedure followed to extract features from the created
database is discussed. In Sect. 4, the method used to create a database from the
extracted features is described. In Sect. 5, various machine learning classification
algorithms are discussed, along with how they are used to classify the various
phone slipcases; the observations obtained after implementing them are tabulated.
In Sect. 6, the final conclusion is drawn based on the results obtained.
2 Creation of Database
In our study, we considered six phone slipping cases: the normal touch keep case,
accidental keep case, complete slipping case, slip till tipping point case, flipping case,
and falling case. These phone slipping cases were chosen as they are the
most common ways in which a phone becomes vulnerable to falling or slipping. The first case,
the normal touch keep case, is the reference case where the phone is simply
kept on the table by the user and the observations are recorded while performing this
act. The second case is the accidental keep, where the phone is thrown onto a table or
chair in a violent way. The next case is the complete slip case, where the phone is
made to slip completely down a slope. The next case is the slip till tipping point case,
where the phone is placed on a slope and the readings are recorded until the point at which
the phone starts slipping. In the flipping case, the phone is flipped and thrown from
one point to another. The last case is the falling case, where the phone is subjected
to a controlled fall from different heights. The phone position cases are shown in Fig. 1.
In order to create the required database, 20 samples were taken for each of the
phone slipping cases. As there were six cases, a total of 120 samples were collected.
To collect the samples for each case, the MATLAB Mobile application was used. For each
sample of the different cases, data were collected for the acceleration along the x,
y, and z axes and for the orientation (azimuth, pitch, and roll) using the device's
built-in sensors, the accelerometer and gyroscope [1]. The angular projection of the phone [15]
is shown in Fig. 2.
Figures 3 and 4 show a sample of the accelerometer readings versus time and the
gyroscope readings versus time along the three axes for the first case (normal touch).
The same procedure is followed for the remaining five cases.
The data obtained from the phone were compiled into a database for further use. This
database was subjected to various feature extraction procedures, as a result of which
the necessary and important features were obtained.
3 Feature Extraction
One of the main objectives is the extraction of various features from the created
database of different phone slipcases. From the samples, the required features and
parameters are extracted for further classification purposes. The vast database was
transformed into a reduced set of features. The features extracted are as follows (a
brief sketch of these computations is given below):
• Mean: the average of the acceleration and orientation values of each sample, taken
independently for each of the three axes.
• Variance: the average of the squared differences of the sample values from the
mean value, taken independently for each of the three axes.
• Root Mean Square (RMS): the square root of the mean of the squared sample values,
taken independently for each of the three axes.
• Zero crossing rate (ZCR): the number of times the acceleration samples change from
positive to negative and back.
• Fast Fourier transform (FFT): the first five fast Fourier transform coefficients of the
acceleration samples for the three axes.
From the database created, the total number of acceleration related features and
the orientation related features extracted from a sample of each case along all the
three axes combined was 54. Figure 5 shows the feature extraction process [16].
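A compact sketch of the listed computations is given below; the assumption that all five feature types are applied to all six channels (three acceleration and three orientation axes) is inferred from the stated total of 54 features per sample (9 features × 6 channels), and the array layout is illustrative.

```python
import numpy as np

def extract_axis_features(signal):
    """Features of one channel of one sample: mean, variance, RMS, number of
    sign changes (ZCR), and the magnitudes of the first five FFT coefficients."""
    mean = np.mean(signal)
    var = np.var(signal)
    rms = np.sqrt(np.mean(signal ** 2))
    zcr = int(np.sum(np.abs(np.diff(np.sign(signal))) > 0))
    fft5 = np.abs(np.fft.fft(signal))[:5]
    return np.concatenate(([mean, var, rms, zcr], fft5))

def extract_sample_features(sample):
    """sample is assumed to be an (N, 6) array: acceleration x, y, z followed
    by azimuth, pitch, roll. Features are computed per column and concatenated,
    giving 9 x 6 = 54 values per sample."""
    return np.concatenate([extract_axis_features(sample[:, c])
                           for c in range(sample.shape[1])])
```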
After extracting the necessary features from the various samples, the features were
arranged based on the sample values for the different phone slipcases [17]. The features
were also arranged so that all the features of all samples of one particular
case were put together. To check the validity of the created feature database, the
concept of correlation was used. It was checked whether the correlation value tended
towards 1 when different samples of the same phone slipcase were considered and
towards 0 when samples of different cases were compared. This was done to verify
that the sample values within each case were similar to each other and that the sample
values of different cases were not similar to each other.
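The validity check described above can be sketched as an average pairwise correlation between feature vectors; the expectation that within-case pairs correlate towards 1 and cross-case pairs towards 0 follows the text, while the matrix layout is an assumption.

```python
import numpy as np

def mean_pairwise_correlation(features_a, features_b):
    """Average Pearson correlation between every feature vector in features_a
    and every feature vector in features_b (rows are samples)."""
    corrs = [np.corrcoef(fa, fb)[0, 1]
             for fa in features_a for fb in features_b]
    return float(np.mean(corrs))

# Expected behaviour for a valid feature database (values are illustrative):
# mean_pairwise_correlation(case_A_feats, case_A_feats) -> close to 1
# mean_pairwise_correlation(case_A_feats, case_B_feats) -> closer to 0
```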
In our paper, six phone slip cases were considered, namely normal touch keep (A),
accidental keep (B), complete slip (C), slip till tipping point (D), flip (E), and fall
(F). The extracted feature database was created and subjected to various classification
algorithms to classify these cases. The numbers of training and testing features are
presented in Table 1.
We used four major neural network classifiers to observe the results: Feedforward [18],
Pattern Net, Fit Net, and Cascade Net. The pattern recognition network uses a basic
neural network for grouping or classifying the patterns present in the dataset; in this
paper, we used it for classifying the patterns among the different slipcases, and training
has been done by providing predefined targets in a supervised way. The fit network is a
basic feedforward network which uses a genetic algorithm to tune the learning parameters;
it can be used for both regression and classification, and here it is used for classification
by providing 70% of the data for training and the remaining 30% for testing. In the
cascaded neural network, the neurons of each layer are interconnected with the preceding,
succeeding, and input layer neurons, which produces confined outputs; since the network
has a larger number of weights due to the additional interconnections, the memory and
time required for training are comparatively higher than for the other networks.
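The paper trains MATLAB's pattern, fit, cascade, and feedforward networks; as a library-neutral illustration of the 70/30 training workflow only, the sketch below uses a generic multilayer perceptron from scikit-learn as a stand-in, with an assumed hidden-layer size.

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def train_pairwise_classifier(features, labels, hidden_neurons=10, seed=0):
    """Train one binary classifier for a pair of slipcases.
    features: (n_samples, 54) array; labels: array of 0/1 class indices."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.3, random_state=seed, stratify=labels)
    clf = MLPClassifier(hidden_layer_sizes=(hidden_neurons,), max_iter=2000,
                        random_state=seed)
    clf.fit(X_tr, y_tr)
    # Return the trained model and its test accuracy in percent.
    return clf, accuracy_score(y_te, clf.predict(X_te)) * 100.0
```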
The total number of features extracted from all the samples over all the cases is 6480. The
total numbers of testing and training features are 1944 and 4536, respectively, as shown in
Table 1. The results obtained after implementing these neural networks on the extracted features
are summarized in the classification performance table (Table 2). In Table 2, the various
neural networks were implemented on pairs of the phone slipcases and the classification
accuracies in percentage were observed. Then, for each neural network, the
average of the classification percentages was calculated to indicate which of the neural
networks can best classify the various phone slipcases. Also, the average of the classification
percentages for each case over the various neural networks was
calculated to identify which phone slipcase can be most easily classified. From Table 2, it
can be inferred that the classification accuracy among the various pairs
of phone slipcases is maximum when the Pattern Net is employed,
because the classification percentages for the individual pairs and the average of the
classification percentages over the cases are both highest for Pattern Net. Moreover, the
recognition of the AD pair shows better than average results when compared to the other
cases due to the high accuracy rate of Pattern Net. The highest accuracy of 100% is found
for Pattern Net on the AF samples; since Cascade Net shows 25% lower accuracy than on AD,
the average over the cases reduces by 2.083 units. In some cases, like DF, BE, and DE, Cascade
Net performs better than the other networks, with accuracies of 66.66%, 66.66%, and
58.33%, respectively. Finally, the overall ranking of the four networks is
1-Pattern Net, 2-Cascade Net, 3-Feedforward Net, and 4-Fit Net. Since Fit Net has
learning factors in both the neural net and the GA, its accuracy can
be increased by increasing the number of training samples.
6 Conclusion
From the results obtained, we can conclude that, after employing the Pattern Net on
the extracted features, the classification accuracies for the different pairs of phone
slipcases are maximum and, on the whole, the average classification accuracy of
Pattern Net over all the cases is maximum (69.998%). Therefore, it can be concluded
that, out of the four neural networks used, the Pattern Net can more efficiently classify
the various phone slipcases and thus indicate whether a particular phone slip position is
vulnerable or safe.
In the future, we plan to improve our phone slip recognition system in several ways.
First, the efficiency of the project can be improved if it can classify various complex
phone positions. Second, various additional and more sophisticated features can be
extracted from various samples of different chosen cases to improve the classification
accuracy. The work presented in this paper is a part of a larger effort to classify various
phone positions into vulnerable and safe positions. Mobile phones are becoming
increasingly advanced and the inbuilt sensors present in them can be configured in a
way to identify if the phone is in a vulnerable position to fall or it is in a safe position.
References
15. https://in.mathworks.com/products/matlabmobile.html#acquiredatafromsensors
16. Lester J, Choudhury T, Borriello G (2006) A practical approach to recognizing physical activ-
ities. In: Lecture notes in computer science: pervasive computing, pp 1–16
17. Krishnan N, Colbry D, Juillard C, Panchanathan S (2008) Real time human activity recognition
using tri-Axial accelerometers. In: Sensors, signals and information processing workshop
18. Plummer EA (2000) Time series forecasting with feed-forward neural networks: guidelines
and limitations, July 2000
Surface Charging Analysis of GSAT-19
Using NASCAP
1 Introduction
A spacecraft located in geosynchronous orbit falls under the Van Allen radiation
belt, which is a region of highly energetic, low-density neutral plasma. The
plasma remains neutral overall; however, negatively charged electrons and positive ions
are present in it. It has been observed that the electronic systems of
spacecraft undergo unwanted switching. From studies of plasma behavior and its
effect on spacecraft, it is known that the Geo Plasma environment charges the spacecraft
surfaces to negative voltages of the order of kilovolts with respect to the surrounding plasma
[1]. This negative charge build-up on spacecraft surfaces results in electrostatic
discharges. Such electrostatic discharges can couple into the wanted signals and cause
anomalies. During geomagnetic storms, a huge number of electrons and ions are
pushed into the orbital region of the spacecraft, and this enhances the spacecraft surface
charging phenomenon. This process of surface charging of geosynchronous spacecraft
by geomagnetic substorms is called "spacecraft charging" [2].
Although the accumulation of charge on the spacecraft is not by itself a serious
ESD design concern, such charging will enhance the contamination of thermal control
surfaces and degrade their thermal properties. Also, since the spacecraft surfaces are
not always uniform in their material properties, and since sunlight can illuminate only
one side at a time, there will always be some differential charging as well as absolute
charging. The effect of spacecraft surface charging can be so severe that spacecraft
functionality is disturbed, electronic circuitry switches unexpectedly, and failures
occur. An ESD can occur if the differential voltage between different
surfaces of the spacecraft crosses the breakdown threshold; this ESD can cause
electromagnetic interference with the electronic circuitry, be responsible
for spacecraft anomalies, and may lead to system failure.
Since the discovery of the phenomenon of spacecraft charging, many efforts
have been made to develop analytical models for the prediction of spacecraft-charging
effects [3–5]. Environmental models (describing the magnetospheric sub-
storms in terms of electron and ion concentrations, particle energies and probabilities
of occurrences of various severities of substorm activity) and spacecraft-charging
models (describing the interaction of the spacecraft with the Geo Plasma environ-
ment) are prominent among these. The most detailed spacecraft-charging model
available to date is NASCAP developed by S3 (Systems, Science, and Software Inc.)
for NASA [6, 7]. NASCAP can analyze the surface charging of a three dimensional,
complex body as a function of time for a given space environmental condition and
specified surface geometry. Material properties of surfaces are included in the com-
putations. Surface potentials, low energy sheath properties, the potential distribution
Surface Charging Analysis of GSAT-19 Using NASCAP 339
in space, and particle trajectories are computed. This paper gives in detail about
the on-orbit charging analysis of GSAT-19 satellite using NASCAP for different
environmental conditions.
The Maxwellian distribution function is used to represent the plasma at the geosyn-
chronous earth orbit in mathematical terms; it provides a measure of the number of
particles (electrons/ions) in an infinitesimal volume of space within a given velocity
range. It is observed that the particle temperature for the worst-case environment is
always greater than that for the normal environment, implying that the level of
charging in the worst-case environment is always greater than that in the normal
environment. The worst-case plasma environment is represented by a single
population with an average value of the number density and velocity (or energy
content or temperature) of the particles, and the corresponding distribution function
is called a single Maxwellian. For the analysis of surface charging of GSAT-19,
worst-case plasma conditions, i.e., the single Maxwellian environment, have been
considered. The parameters of the single Maxwellian are defined in Table 1.
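As a simple illustration of the single Maxwellian description used here, the Python sketch below evaluates the distribution f(v) = n (m/2πkT)^{3/2} exp(−mv²/2kT) for an assumed electron density and temperature; the numerical values in the sketch are placeholders, not the Table 1 parameters.

import math

# Hedged sketch: single Maxwellian velocity distribution for electrons.
# The density n and temperature T below are assumed placeholder values.
k_B = 1.380649e-23            # Boltzmann constant, J/K
m_e = 9.109e-31               # electron mass, kg
q_e = 1.602e-19               # elementary charge, C

n = 1.0e6                     # assumed number density, m^-3
T = 1.0e4 * q_e / k_B         # assumed temperature of 10 keV, converted to K

def maxwellian(v):
    a = m_e / (2.0 * math.pi * k_B * T)
    return n * a ** 1.5 * math.exp(-m_e * v ** 2 / (2.0 * k_B * T))

v_th = math.sqrt(2.0 * k_B * T / m_e)      # thermal speed
print(v_th, maxwellian(v_th))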
The charging analysis has been carried out for the GSAT-19 spacecraft for a
sunlit–eclipse–sunlit passage under the single Maxwellian worst-case environment.
The spacecraft is assumed to be in the sunlit condition and uncharged initially.
Charging commences at t = 0 s and is allowed to continue until t = 7200 s (2 h). At
the end of 7200 s, the spacecraft is assumed to enter eclipse, and charging under the
eclipse condition is continued for a further 4363 s (typical eclipse time, 72 min 43 s).
When the spacecraft emerges from eclipse, it is observed that the surfaces once again
attain the potentials corresponding to the sunlit conditions.
• The PAA support structure is covered with ITOC MLI and reflecting surface is
white painted (TiO2 ).
• CERM (Ceramic) cloth on LAM deck.
• The yoke material is CFRP.
• The solar array substrate is CFRP, with CERS solar cells (cerium-doped silicon
with MgF2 coating) on the front side of the solar array. SPN01 and SPS01 Kapton
is used on the solar cell side for the non-populated area.
Sunlit/Shadow–Eclipse–Sunlit/Shadow Charging Analysis for Worst-Case Single
Maxwellian Plasma: In this simulation, the spacecraft passes through the
sunlit/shadow–eclipse–sunlit/shadow condition as defined in the charging sequence.
The potential developed on the surface materials has been studied in this analysis.
The potential plots have been generated for a selected number of cells (Figs. 3, 4, 5, 6, 7,
and 8).
The differential potential acquired by the different surface cells has been computed
and is summarized in Table 2.
The surface charging profiles for Germanium, ITOC, Aluminum, CFRP, and Silver
(which are conductive in the ESD sense) are identical in the sunlit/shadow–eclipse–
sunlit/shadow passage, and hence they do not show a significant differential potential
with respect to the structure (Al).
Materials such as Ceramic (CERM), RTV (default SiO2), Kapton, white paint
(TiO2), and solar cell cover glass (CERS) show different surface charging profiles.
Ceramic, SiO2 (default), and Kapton charge to more negative absolute potentials,
and CERS charges to a less negative absolute potential, compared to Aluminum.
The SPS01 and SPN01 solar panels have a major area of exposed Kapton. The
charging analysis was carried out, and it was observed that the differential potential
on the Kapton cells is in the range of −8.38 to −12.1 kV and increases further in the
negative direction. The high differential potential on the Kapton cells will cause ESD
on the solar panel, and no discharge path is available.
Hence, the following options for the SPN01 and SPS01 solar panels were analyzed
in the GSAT-19 surface charging analysis. In a detailed analysis, it was observed that
a conductive RTV grid on the solar panel reduces the differential potential.
Table 2 Differential potential acquired by the exposed surfaces with respect to the structure (Al)
Cell no. | Material | Absolute potential (kV) @ 7,200 s / 11,563 s / 20,000 s | Differential potential (kV) @ 7,200 s / 11,563 s / 20,000 s
1 | GERM | −5.27 / −9.84 / −5.31 | 0.00 / 0.00 / 0.00
3 | ITOC | −5.27 / −9.84 / −5.31 | 0.00 / 0.00 / 0.00
45 | TEFL | −16.40 / −18.90 / −16.8 | −11.1 / −9.08 / −11.5
50 | TIO2 | −6.10 / −10.90 / −7.92 | −0.82 / −1.09 / −1.82
71 | ALUM | −5.27 / −9.840 / −5.31 | 0.00 / 0.00 / 0.00
78 | CERM | −9.39 / −12.90 / −9.45 | −4.12 / −3.03 / −4.14
176 | CERS | −3.53 / −6.16 / −3.75 | 1.75 / 3.71 / 1.56
188 | CFRP | −5.27 / −9.84 / −5.31 | 0.00 / 0.00 / 0.00
336 | KAPT_N/N1 | −6.50 / −13.8 / −7.72 | −1.23 / −4.00 / −2.41
383 | KAPT_N2 | −13.7 / −19.0 / −17.5 | −8.38 / −9.13 / −12.1
376 | KAPT_S/S1 | −13.7 / −19.0 / −17.5 | −8.38 / −9.13 / −12.1
327 | KAPT_S2 | −13.7 / −19.0 / −17.5 | −8.38 / −9.13 / −12.1
3 Conclusion
• The solar-panel-level NASCAP run with a 0.15 m × 0.15 m conductive RTV grid
shows that the differential potential on Kapton with respect to the substrate is up
to −190 V.
• The conductive RTV grid configuration on the solar panel is acceptable, as the
differential potential is of the order of −190 V with the 0.15 m × 0.15 m grid.
• The subsystem implementation of the conductive RTV grid is for a 0.1 m × 0.1 m
grid, and hence a further reduction in differential potential is expected. Grounding
of the RTV grid to the substrate ground is to be ensured by the solar panel
subsystem at the subsystem level.
References
1. De Forest SE, Mc Ilwain CE (1971) Plasma clouds in the magnetosphere. J Geophys Res
76(16):3587–3611
2. De Forest SE (1972) Spacecraft charging at synchronous altitudes. J Geophys Res 77:651–659
3. Inouye GT (1975) Spacecraft charging model. J Spacecr 12(10): 613–620
4. Katz I et al (1977) A three dimensional dynamic study of electrostatic charging in materials.
NASA-CR-135256
5. Prokopenko SML, Laframboise JG (1980) High voltage differential charging of geostationary
spacecraft. J Geophys Res 85(A8): 4125–4131
6. Stansard PR et al (1982) NASCAP programmer’s reference manual. SSS-R-5443
7. Mandell MJ, Stannard PR, Katz I (1993) NASCAP programmers reference manual. NASA-
CR-191044
8. Purvis CK, Garrett HB, Whittleesey AC, Stevens NJ (1984) Design guidelines for assessing
and controlling spacecraft charging effects. NASA-TP-2361
9. Drolshagen G (1994) List of materials and properties for charging simulation. ESA TOS-EMA
10. Koller LR, Bergess JS (1946) Secondary emission from germanium, boron and silicon. Phys
Rev 70(7–8):571
Performance Evaluation of Vedic
Multiplier Using Multiplexer-Based
Adders
Abstract Fast multipliers are necessary elements in most applications such as the
Internet of Things (IoT), image processing, and digital signal processing. In the
present scenario, the Vedic multiplier based on the Urdhva-Tiryagbyam sutra is
preeminent when evaluated on parameters such as area, power, and delay. By
observing the architecture of the conventional Vedic multiplier, it is evident that its
performance can still be improved by using modified half adders and full adders. The
Vedic multiplier using modified adders is coded in Verilog HDL, and the simulation
and synthesis are carried out using XILINX ISE 12.2 software on a Spartan 3E kit.
In addition, the proposed multipliers are compared with the conventional Vedic
multiplier in terms of slices, LUTs, and combinational delay.
1 Introduction
Multipliers have been developed since ancient times; to meet the speed requirements
of modern processors, and to satisfy the demand for lower power consumption,
smaller and more area-efficient multipliers are needed. In recent times, the Vedic
multiplier has been found to be one of the fastest and lowest power consumption
multipliers. To improve the performance of the existing Vedic multiplier, several
modifications are still in progress. Rama Lakshmanna et al. [1] proposed a "Modified
Vedic Multiplier using Koggstone Adders" in which a comparison is made between
Carry-Select Adders (CSLAs) and Koggstone Adders (KSAs) in terms of power
consumption and memory occupation, where the KSA architecture consumes less
power but occupies more memory than the CSLA architecture. Moreover, the speed
of the CSLA is high when compared to an N-bit Ripple Carry Adder (RCA) due to a
reduction in the carry propagation delay [2, 3], achieved by separately producing the
carries of many radices to select between concurrently generated sums. The
propagation delay of the Carry-Select Adder is further reduced by designing the
CSLA with multiplexers [4]; even though the speed is high, the amount of hardware
required to design the CSLA is larger, so the CSLA consumes more area [5] than an
N-bit RCA. Ram et al. [6] proposed an "Area-Efficient Vedic Multiplier" in which a
BEC adder is used to reduce the area by replacing the adder with input carry "1" by
a Binary-to-Excess code converter in the normal CSLA structure, so that the BEC
adder requires only a small number of logic gates; moreover, the Vedic multiplier
using the BEC adder occupies less memory when compared to the conventional
Vedic multiplier using CSLA. The area is further reduced by using zero-finding logic
in the CSLA [7] instead of an RCA with input carry equal to 1, but with an increase
in delay. Various designs of full adders using multiplexers and logic gates in CSLA
architectures [8] have proved that their performance is more efficient than
conventional architectures. It has also been proved [9] that the numbers of slices and
LUTs in the Vedic multiplier structure using modified full adders are less compared to
the normal Vedic multiplier. So far, various designs of full adders using multiplexers
have been discussed. Even then, the performance can still be improved by using
multiplexer-based adders.
The rest of the paper is structured as follows. Section 2 deals with the proposed
methodology of the multiplexer-based adders. Section 3 presents the Simulation
Results. Comparisons of the normal Vedic multiplier and modified Vedic multipliers
are given in Sect. 4. Finally, the work is concluded in Sect. 5.
2 Proposed Methodologies
As shown in Fig. 1, the proposed structure of the Vedic multiplier using MBHA
consists of four similar-sized 8 × 8 modified Vedic multiplier blocks. The 16-bit input
sequences, i.e., the multiplicand and multiplier sequences, are partitioned into two
8-bit numbers each: the first 16-bit sequence a[15:0] is partitioned as a[15:8] and
a[7:0], and the other sequence b[15:0] is partitioned as b[15:8] and b[7:0]. The
resultant outputs from each multiplier block are given as inputs to a 16-bit Ripple
Carry Adder with Multiplexer-Based Half Adders (MBHA). Eventually, the final
section in the modified architecture consists of an RCA with MBHA of 23-bit size.
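To make the partitioning concrete, the following Python sketch models the arithmetic decomposition described above (four 8 × 8 partial products recombined by shifted additions); it is a behavioural illustration only, not the gate-level Verilog design of the paper.

def vedic16x16(a, b):
    # Split each 16-bit operand into 8-bit halves.
    a_hi, a_lo = (a >> 8) & 0xFF, a & 0xFF
    b_hi, b_lo = (b >> 8) & 0xFF, b & 0xFF

    # Four 8 x 8 partial products (one per modified Vedic multiplier block).
    p0 = a_lo * b_lo
    p1 = a_lo * b_hi
    p2 = a_hi * b_lo
    p3 = a_hi * b_hi

    # The middle products are summed first and then combined with the outer
    # products, mirroring the 16-bit and wider adder stages of Fig. 1.
    mid = p1 + p2
    return p0 + (mid << 8) + (p3 << 16)

assert vedic16x16(0xF0F0, 0x0F0F) == 0xF0F0 * 0x0F0F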
As shown in Fig. 2, two 2:1 multiplexers with a common selection line "a" are used
to generate the sum and carry values. The inputs of the first 2:1 multiplexer are "b"
and "b̄" and its output is "s", whereas the inputs of the second 2:1 multiplexer are "0"
and "b" and its output is "c". If the value of the selection line "a" is "0", then the value
of input "b" is transferred to the sum output "s" and "0" is transferred to the carry
output "c". If the selection line "a" is "1", then "b̄" is fed to the output "s" and "b" is
fed to the output "c".
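A minimal Python behavioural model of this multiplexer-based half adder is sketched below (an illustration, not the authors' Verilog code); the assertion checks it against the ordinary half-adder truth table.

def mux2(d0, d1, sel):
    # 2:1 multiplexer: d0 when sel = 0, d1 when sel = 1.
    return d1 if sel else d0

def mbha(a, b):
    # Sum: mux between b and NOT b, selected by a; carry: mux between 0 and b.
    s = mux2(b, 1 - b, a)
    c = mux2(0, b, a)
    return s, c

for a in (0, 1):
    for b in (0, 1):
        assert mbha(a, b) == (a ^ b, a & b)   # s = a XOR b, c = a AND b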
In the design of the 16 × 16 Vedic multiplier using the Multiplexer-Based Half Adder
and Modified Full Adder (MBHA and MFA), the major building block is the 8 × 8
modified Vedic multiplier, which is designed using MBHA and MFA. The four
similar multipliers produce 16-bit intermediate products, and these intermediate
products are added by 16-bit Ripple Carry Adders (RCAs) with MBHA and MFA
as shown in Fig. 3. Further, the outputs from these Ripple Carry Adders are again
connected to a Ripple Carry Adder with MBHA and MFA of 23-bit size.
S. Murugeswari and Dr. S. Kaja Mohideen have suggested the Modified Full Adder
(MFA) to improve the performance of the Vedic multiplier in terms of area [10]. A
single XOR gate and two 2:1 multiplexers are used in this modified full adder design.
The inputs of the first multiplexer are "a" and "ā", and its selection line is "y"; this
selection line is obtained by performing the XOR operation on the inputs "b" and
"cin". The inputs of the second multiplexer are "a" and "b" with the same selection
line "y". The output acquired from the first multiplexer is "s", whereas the output of
the second multiplexer is "c". The schematic diagram of the Modified Full Adder
(MFA) is given in Fig. 4.
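A behavioural Python sketch of such an XOR-plus-two-multiplexer full adder is given below; the ordering of the multiplexer data inputs is an assumption chosen here so that the result matches the ordinary full-adder truth table, and it is only an illustration of the idea in [10], not the authors' schematic.

def mux2(d0, d1, sel):
    # 2:1 multiplexer: d0 when sel = 0, d1 when sel = 1.
    return d1 if sel else d0

def mfa(a, b, cin):
    y = b ^ cin               # single XOR gate drives both selection lines
    s = mux2(a, 1 - a, y)     # y = 0 -> s = a, y = 1 -> s = NOT a
    c = mux2(b, a, y)         # y = 0 -> c = b (= cin), y = 1 -> c = a
    return s, c

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            total = a + b + cin
            assert mfa(a, b, cin) == (total & 1, total >> 1)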
3 Simulation Results
The proposed architectures are implemented and simulated on the Xilinx ISIM tool
on an Intel Core 2 Duo processor with a 32-bit operating system, 2 GB RAM, and a
2.93 GHz clock frequency. Simulation results for the normal and modified 16-bit
architectures are presented in Figs. 5, 6 and 7.
Fig. 7 Simulation result for 16-bit Vedic multiplier using MBHA and MFA
The inputs for the 16-bit normal Vedic multiplier a[15:0], b[15:0] are taken as
"1111000011110000", "0000111100001111" and the obtained output c[31:0] is
"00001110001011000010111000010000".
The inputs for 16-bit Vedic Multiplier using MBHA a[15:0], b[15:0] are taken
as “1111111111111111”, “1010101010101010” and the obtained output c[31:0] is
“10101010101010010101010101010110”.
The inputs for 16-bit Vedic Multiplier using MBHA and MFA a[15:0], b[15:0]
are taken as “1111111111111110”, “1100101001010101” and the obtained output
c[31:0] is “11001010010100110110101101010110”.
4 Comparisons
From Fig. 8, it is observed that the number of slices is increased by 4.5% and the
number of LUTs is increased by 4.02%, but the combinational delay is reduced by
1.05%, in the 8-bit Vedic multiplier using MBHA compared to the normal Vedic
multiplier. It is also observed that in the modified architecture of the 8-bit Vedic
multiplier using MBHA and MFA, the numbers of slices and LUTs are lower by
4.71% and 4.18%, respectively, but the combinational delay is increased by 9.2%
compared to the normal Vedic multiplier.
It is also observed from Fig. 9 that, compared to the normal Vedic multiplier, the
numbers of slices and LUTs are greater by 5.7% and 4.17%, respectively, and the
combinational delay is reduced by 0.66%, in the 16-bit Vedic multiplier using MBHA.
In the 16-bit Vedic multiplier using MBHA and MFA, the total numbers of slices and
LUTs are decreased by 2.24% and 2.78%, respectively, and the combinational delay
is increased by 1.65% when compared to the normal Vedic multiplier.
Furthermore, Tables 1 and 2 show the analysis details of the normal Vedic multiplier
and the modified Vedic multipliers using multiplexer-based adders in terms of slices,
LUTs, and combinational delay for 8-bit and 16-bit, respectively.
Table 1 Performance analysis for 8-bit Vedic multiplier using MBHA and modified full adders
Parameter | Normal Vedic multiplier | Vedic multiplier using multiplexer-based half adder | Vedic multiplier using MBHA and modified full adder
No. of slices | 106 | 111 | 101
No. of LUTs | 191 | 199 | 183
Combinational delay (ns) | 22.16 | 21.926 | 24.409
Table 2 Performance analysis for 16-bit Vedic multiplier using MBHA and modified full adders
Parameter | Normal Vedic multiplier | Vedic multiplier using multiplexer-based half adder | Vedic multiplier using MBHA and modified full adder
No. of slices | 456 | 484 | 446
No. of LUTs | 826 | 862 | 803
Combinational delay (ns) | 40.895 | 40.624 | 41.585
5 Conclusions
References
1. Rama Lakshmanna Y, Padma Rao GVS, Bala Sindhuri K, Udaya Kumar N (2016) Modified
vedic multiplier using koggstone adders. Int J Adv Res Comput Commun Eng ISO 3297:2007
Certified, 5(10):361–371
2. Mohanty BK, Patel SK (2014) Area-delay-power efficient carry select adder. IEEE Trans
Circuits Syst I Express Briefs 61(6). https://doi.org/10.1109/tcs11.2014,2319695
3. Bedrij OJ (1962) Carry select adder. IRE Trans Electron Comput EC-11(3):340–344. https://
doi.org/10.1109/iretelc.1962.5407919
4. Sajesh Kumar U, Mohamed Salih K, Sajith K (2012) Design and implementation of carry
select adder without using multiplexers. In: 2012 1st international conference on emerging
technology trends in electronics, communication and networking (ET2ECN), pp 1–5, 19–21
Dec 2012. https://doi.org/10.1109/et2ecn.2012.6470067
5. Ramkumar B, Kittur HM, Kannan PM (2010) ASIC implementation of modified faster carry
save adder. Eur J Sci Res 42(1):53–58
6. Ram GC, Sudha Rani D, Rama Lakshmanna Y, Bala Sindhuri K (2016) Area efficient modified
vedic multiplier. In: International conference on circuit, power and computing technologies,
pp 1–5 Mar 2016. https://doi.org/10.1109/iccpct.2016.7530294
7. Kandula BS, Vasavi KP, Prabha IS (2016) Area efficient VLSI architecture for square root
carry select adder using zero finding logic. Proc Comput Sci 89, 640–650
8. Anirudh Kumar Maurya K, Bala Sindhuri K, Rama Lakshmanna Y, Udaya Kumar N (2017)
Design and implementation of 32-bit adders using various full adders. In: International con-
ference on innovations in power and advanced computing technologies (i-PACT 2017), pp
1–6
9. Bandi VL (2017) Performance analysis for vedic multiplier using modified full adders. In:
International conference on innovations in power and advanced computing technologies (i-
PACT2017), pp 1–3
10. Murugeswari S, Kaja Mohideen S (2014) Design of area efficient and low power multiplier
using multiplexer based full adder. In: 2014 International conference on current trends in
engineering and technology, ICCET, Coimbatore. pp 388–392. https://doi.org/10.1109/icctet.
2014.6966322
Feature Extraction Techniques
for Aggressive and Nonaggressive
Metabolic Conditions of Brain
1 Introduction
A brain tumor is an abnormal growth of cells in the brain; it may be cancerous or
noncancerous. Tumors that arise from the brain or spinal cord are called primary
tumors. Tumors may also spread to the brain from another primary cancer site, and
these are called secondary or metastatic tumors. Primary tumors may be either
nonaggressive (benign) or aggressive (malignant). Secondary tumors are aggressive
and usually originate from lung cancers. Thus there are two types of brain tumors:
nonaggressive and aggressive. Benign tumors are noncancerous, slow growing, and
do not invade surrounding tissues, whereas malignant tumors are cancerous, fast
growing, and their cells invade surrounding tissues. These differences appear visually
in medical imaging systems. Among the many imaging techniques available in
medical sciences, MRI plays a vital role, because MRI provides clear information
about the metabolic conditions of tumor tissues, which supports the diagnosis process
as well as conventional anatomic imaging of the brain's aggressive and nonaggressive
conditions. Extracting the hidden information in medical images and identifying
suitable features is difficult [1]. To support radiologists and physicians in diagnosing
the disease, this paper introduces two different types of feature extraction techniques.
These methods are applied in many applications, e.g., separation of nonidentical
objects, segmentation, and compression; they are based on a study of the variations
of pixel intensity of the image, through which the physical qualities of the image are
transformed into statistical and geometrical descriptions. To obtain an effective
feature subset by feature selection, the original feature set must be sufficient [2, 3].
The statistical method transforms image intensities based on the number of pixels,
defining the local features of an image. These features indicate the homogeneity and
non-homogeneity conditions of tissues based on contrast, cluster shade, distance of
image elements, direction of angles, etc., and these values are tabulated in Table 3.
In the geometrical approach, the ability to predict aggressive and nonaggressive
tumors depends upon the geometrical shape, the cell metabolic conditions, and, as
one of the important factors, the lumpiness of the affected area; these values are
summarized in Tables 1 and 2. The investigated results are compared with clinical
results, and the optimum feature values of both aggressive and nonaggressive lesions
are obtained. For this analysis, some known tumor images are considered and the
optimum feature values are obtained; the method is then applied to unknown brain
tumor images.
There are different feature extraction and selection methods, and some of them are
adopted here for the identification of abnormal conditions in MRI brain images. The
extracted features are generally based on image texture, shape, spatial structure,
contrast, roughness, and some other properties of the image. For medical images, the
maximum information is obtained from the texture, i.e., the pixel intensity variations.
For this reason, second-order statistical techniques and fractal geometrical feature
extraction techniques are mainly considered here. Texture analysis is a surface
property and concerns the study of the variation in the intensity of image elements
(pixel values) and pixel coordinates acquired under certain conditions [1, 5]. For this
analysis, the image intensities are transformed to the fractal geometry domain and
the second-order statistical technique [1].
where i and j are horizontal and vertical directions of the image and A(i, j) is the
normalized gray-level co-occurrence matrix.
The corresponding dissimilarity value is provided by the dissimilarity parameter

\text{Dissimilarity} = \sum_{i,j} |i - j| \, A(i, j)   (3)
Dissimilarity indicates the difference in pixel intensity; this value is high when the
metabolic cells are few, which implies that nonaggressive tumors have low metabolic
cells. If the dissimilarity is low, as in the aggressive case, it indicates more cancerous
cells in the aggressive case.
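As a small illustration of these texture features, the Python sketch below computes the dissimilarity of Eq. (3) (and, for comparison, a standard homogeneity measure) from a made-up normalized co-occurrence matrix; the matrix values are hypothetical and not taken from the paper's data.

import numpy as np

# Hypothetical 4 x 4 normalized gray-level co-occurrence matrix A(i, j).
A = np.array([[0.10, 0.05, 0.00, 0.00],
              [0.05, 0.20, 0.05, 0.00],
              [0.00, 0.05, 0.25, 0.05],
              [0.00, 0.00, 0.05, 0.15]])
A = A / A.sum()                                   # ensure normalization

i, j = np.indices(A.shape)
dissimilarity = np.sum(np.abs(i - j) * A)         # Eq. (3)
homogeneity = np.sum(A / (1.0 + np.abs(i - j)))   # common companion feature

print(dissimilarity, homogeneity)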
The fractal geometry technique (FGT) uses fractal geometry to find the dimension
of an object; this dimension was developed mathematically by Hausdorff and Besicovitch.
The differential box-counting dimension is the most widely used dimension
estimation method [9]. The role of fractal geometry is to provide an unconventional
view of scaling and dimension [10]. In this paper, the differential box-counting
dimension is used to extract the features, and the range of the linear scale and the box
height are also considered. The experiment is performed on two different kinds of
brain disease: aggressive and nonaggressive tumors.
N_r r^D = 1 \quad \text{(or)} \quad D = \frac{\log N_r}{\log (1/r)}   (4)

where N_r is the number of self-similar pieces and r is the scaling factor. The role of
scaling is mathematically defined by the general scaling rule

N \propto \epsilon^{-D}   (5)

where MI is the window maximum intensity, the multiplication of gray levels can be
denoted by l, and H is the number of different gray-level intensities for each window.
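To make the box-counting idea of Eq. (4) concrete, the Python sketch below estimates a fractal dimension by counting occupied boxes at several scales and fitting log N against log(1/r); this is the simpler binary box-counting scheme, whereas the differential box-counting method used in the paper additionally accounts for gray-level box heights.

import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    # Count boxes of side s that contain at least one foreground pixel.
    counts = []
    for s in sizes:
        n = 0
        for r0 in range(0, mask.shape[0], s):
            for c0 in range(0, mask.shape[1], s):
                if mask[r0:r0 + s, c0:c0 + s].any():
                    n += 1
        counts.append(n)
    # Slope of log N versus log(1/r), with r = s / image_size, estimates D.
    log_inv_r = np.log(mask.shape[0] / np.array(sizes, dtype=float))
    return np.polyfit(log_inv_r, np.log(counts), 1)[0]

# A completely filled square should give a dimension close to 2.
square = np.ones((64, 64), dtype=bool)
print(box_counting_dimension(square))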
3 Lacunarity Feature
Lacunarity is the feature used to extract lump-area feature values from the image. It
mainly depends on the metabolic conditions of the tissues of an object. The fractal
dimension is a primary descriptor of various complex textures; however, its values
alone are insufficient, because two different surfaces can have similar fractal values.
To overcome this problem, Mandelbrot introduced the term lacunarity, which
quantifies the denseness of an image surface. The basic idea of this term is to quantify
the gaps present in the image [12]. If the lacunarity value is high, the cells are
nonhomogeneous, and this value is low for homogeneous conditions. If the cells are
homogeneous, they belong to aggressive tumors; if the cells are nonhomogeneous,
they belong to nonaggressive tumors. In the aggressive case, the affected area contains
more chronic cells in different stages, but at every stage the cells occupy the
surrounding healthy tissue and spread in an irregular form. This process, which
occurs in a cell or organism, is necessary for the maintenance of life, and it implies
that the cells in the aggressive case are stronger than in the nonaggressive case.
Lacunarity is defined as the ratio of the variance over the mean value of the function:
L = \frac{1}{MN} \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} \frac{I(m, n)^2}{I(k, l)^2} - 1   (7)

where M and N give the size of the fractal image, k is a constant, and the box side is l.
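A short Python sketch of this lacunarity measure is given below, interpreting Eq. (7) as the ratio of the second moment to the squared mean minus one; the two small arrays are made-up examples used only to show that a homogeneous surface gives low lacunarity and a gappy surface gives high lacunarity.

import numpy as np

def lacunarity(I):
    # L = <I^2> / <I>^2 - 1, i.e., variance over squared mean.
    I = np.asarray(I, dtype=float)
    return np.mean(I ** 2) / np.mean(I) ** 2 - 1.0

homogeneous = np.full((4, 4), 5.0)              # no gaps -> lacunarity 0.0
gappy = np.array([[9, 0, 9, 0],
                  [0, 9, 0, 9],
                  [9, 0, 9, 0],
                  [0, 9, 0, 9]], dtype=float)   # half gaps -> lacunarity 1.0

print(lacunarity(homogeneous), lacunarity(gappy))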
Table 1 shows the nonaggressive lesions of the brain metabolic system. The average
geometrical features are low when compared to the aggressive case, and the
corresponding lacunarity is high. This indicates that the cells are nonaggressive and
nonhomogeneous and have low metabolic conditions. The lacunarity feature gives the
lumpiness values of the tumor: the higher the lacunarity, the more gaps there are in
the affected area. The lifetime of these cells is shorter when compared to the aggressive
case, and these are non-chronic cells. The differential box height is also an important
factor; these values are based on the scaling factor of the image. From the above
results, the average fractal value is 1.6783 and its corresponding high lacunarity value
is 0.0351, obtained by using a box height of 0.0375. In nonaggressive tumors, the
lacunarity is high when compared to aggressive tumors.
Table 2 shows the aggressive lesions of the brain. The average geometrical features
are high when compared to the nonaggressive case, and the corresponding lacunarity
is low; this indicates aggressive conditions, because high metabolism indicates that
more aggressive cells are present in the affected tumor area. These cells are chronic,
long-life cells, since aggressive tumors have a low lump value and high average
fractal values. The differential box height is taken below the value of 1, which depends
upon the scaling factor of the image; a smaller height gives a better result than a
larger height. From Table 2, the average fractal value is 1.8227 and its lacunarity is
0.0325, and these optimum aggressive feature values are obtained by using a box
height of 0.0310.
From Table 3, it is observed that the homogeneity value is 9.8463 and the
corresponding dissimilarity value is 1.2291, which indicates that aggressive tumors
are highly metabolic. These cells are aggressive in nature and occupy the surrounding
lesions. In the nonaggressive case, the homogeneity value is 8.9808 and the
dissimilarity value is 9.3309; the cells in the affected region are low-metabolic cells
in a nonhomogeneous state, whereas the aggressive case shows more homogeneity
and less dissimilarity. The aggressive cells are highly metabolic, long-life, chronic
cells. In the case of nonaggressive cells, lower homogeneity values are observed than
in the aggressive case; a low value indicates that the cells are non-chronic and the
lifetime of the cells is shorter. Higher dissimilarity indicates lower metabolic
conditions when compared to the aggressive case.
Figures 1 and 2 illustrate nonaggressive and aggressive tumors, respectively. From
Fig. 1, it is observed that the lesions surrounding the tumor are soft and of regular
shape, indicating that nonaggressive tumors look like soft tissue cells and have a
shorter lifetime. Figure 2 shows that the lesions in the affected area are of irregular
shape and occupy the surrounding lesions with high metabolic conditions; this shows
that aggressive cells are chronic, long-life cells. This type of tumor can be resolved
only by a surgical process. The features of both cases with different images are given
in Tables 1, 2 and 3.
4 Conclusion
In this paper, two new feature extraction techniques are implemented to detect
aggressive and nonaggressive tumors in MRI brain images, based on metabolic cell
conditions. When the FGT is implemented, a higher average fractal dimension of
1.8227 with a lower lacunarity value of 0.0325 is observed for aggressive tumors,
and a lower average fractal dimension of 1.6783 with a higher lacunarity value of
0.0351 is observed for nonaggressive tumors. These features clearly indicate that
aggressive cells are in a high metabolic state and nonaggressive cells are in a low
metabolic state. In contrast, the homogeneity and dissimilarity features obtained by
the SST do not significantly illustrate the difference between the aggressive and
nonaggressive states of the tumor, due to insufficient extraction of the cell structure
information. The results obtained by the FGT using the two proposed new features,
average fractal dimension and lacunarity, are useful in classifying the aggressive and
nonaggressive states of brain tumors and have high potential to improve clinical
diagnostic tests and pathological studies.
References
1. Al-Kadi OS, Watson D (2008) Texture analysis of aggressive and nonaggressive lung tumor
CE CT Images. IEEE Trans Biome Eng 55(7)
2. Zhou X, Wang J (2015) Feature selection for image classification based on a new ranking
criterion. J Comput Commun 3:74-Published online March 2015
3. Sankar D, Thomas T (2010) Fractal feature based on differential box counting method for
the categorization of digital mammograms. Int J Comput Inf Syst Ind Manag Appl (IJCISIM)
ISSN: 2150–7988, 2:011–019
4. Chen SS, Keller JM, Crownover RM (1993) On the calculation of fractal features from images.
IEEE Trans Pattern Anal Mach Intell 15(10)
5. Chen CC, Daponte JS, Fox MD (1989) Fractal feature analysis and classification in medical
imaging. IEEE Trans Med Imaging 8(2)
6. Korchiynel R, Farssi SM, Sbihi A, Touahni R, Alaoui MT (2014) A combined method of fractal
and GLCM features for MRI and CT scan images classification. Int J Signal Image Process
(SIPIJ) 5(4)
7. Aggarwal N, Agrawal RK (2012) First and second order statistics features for classification of
magnetic resonance brain images. J Signal Inf Process 3:146–153
8. Karthikeyan S, Rengarajan N (2014) Performance analysis of gray level cooccurrence matrix
texture features for glaucoma diagnosis. Am J Appl Sci 11(2):248–257, ISSN: 1546–9239,
Science Publications
9. Liu S (2008) An improved differential box-counting approach to compute fractal dimension
of gray-level image. In: International symposium on information science and engineering
10. Padhy LN (2015) Fractal dimension of gray scale & colour image. Internal J Adv Res Comput
Sci Softw Eng 5(7)
11. kisan S, Priyadarsini M (2015) Relative improved differential box-counting approach to com-
pute fractal dimension of the grey-scale image. Int J Eng Res Gen Sci 3(1),
12. Purkait P, Chakravorti S (2003) Impulse fault classification in transformers by fractal analysis.
IEEE Trans Dielectr Electr Insul 10(1)
Modified RC4 Variants and Their
Performance Analysis
Abstract Two modified RC4 variants are discussed in this paper. First, the
weaknesses in the basic RC4 algorithm are identified; these weaknesses are then
removed in the two proposed algorithms. The implementation of basic RC4, RC4+,
and the proposed variants RC4-1 and RC4-2 has been carried out in both C++ and
VHDL. The performance analysis has been done in terms of encryption time, security,
and resource usage. From the obtained numerical values, it is found that the proposed
algorithms provide better security without increasing the encryption time of the basic
RC4 algorithm.
1 Introduction
RC4 runs in two phases [4]. In the first phase, key scheduling is performed using the
Key Scheduling Algorithm (KSA), and in the second phase, a random byte stream is
generated using the Pseudo-Random Generator Algorithm (PRGA). KSA initializes
the permutation using a key of length between 40 and 256 bytes. After KSA,
pseudo-random bits are generated by PRGA. These bits are then used for encryption
by combining them with the plaintext bits using a bitwise XOR operation; encryption
and decryption are performed in a similar way. The keystream is generated by using
an internal state S. The algorithm for conventional RC4 is presented in Algorithm 1.
Algorithm 1. Basic RC4
KSA:
    for i = 0 to N − 1 do
        S[i] = i
        T[i] = k[i mod keylength]
    end for
    j = 0
    for i = 0 to N − 1 do
        j = (j + S[i] + T[i]) mod N
        swap S[i], S[j]
    end for
PRGA:
    i = 0
    j = 0
    loop
        i = (i + 1) mod N
        j = (j + S[i]) mod N
        swap S[i], S[j]
        t = (S[i] + S[j]) mod N
        key = S[t]
    end loop
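For reference, a direct Python transcription of Algorithm 1 is given below (a sketch for N = 256; the key is supplied as a byte string).

def rc4_ksa(key, N=256):
    S = list(range(N))
    T = [key[i % len(key)] for i in range(N)]
    j = 0
    for i in range(N):
        j = (j + S[i] + T[i]) % N
        S[i], S[j] = S[j], S[i]
    return S

def rc4_prga(S, nbytes, N=256):
    i = j = 0
    out = []
    for _ in range(nbytes):
        i = (i + 1) % N
        j = (j + S[i]) % N
        S[i], S[j] = S[j], S[i]
        t = (S[i] + S[j]) % N
        out.append(S[t])
    return out

keystream = rc4_prga(rc4_ksa(b"Key"), 8)
print([hex(b) for b in keystream])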
Previous Work. In 1995, the first RC4 weakness, concerning the KSA, was
discovered by Roos [5]. It was observed that a perfectly random distribution is not
obtained after the RC4 KSA, and it was noted that no swapping is performed when
i = j. In [6], statistical weaknesses in the keystream were discovered: for the first few
bytes, the output keystream is strongly nonrandom and makes the key vulnerable to
security attacks. More correlations between the RC4 keystream and the key were
observed in [7]. Further, the authors in [8] observed that the second output byte of
RC4 is biased towards zero with a probability of 1/128 instead of 1/256. A highly
secure available RC4 variant is RC4+ [3], which has a complex three-layer KSA
taking about three times more encryption time, and a complex PRGA taking about
1.7 times more execution time, as compared to basic RC4. In RC4+, the first layer of
the KSA is the same as in conventional RC4. In the second layer, more scrambling is
carried out using IVs. In the last and final layer, the keystream is scrambled in a
zigzag manner, where the index takes values in the order: 0, 255, 1, 254, 2, 253, …
Though the algorithm is known to be very secure, its increased complexity and, in
turn, increased execution time are not desirable in RC4. An elaborate discussion
along with the pseudocode of RC4+ is presented in [3, 9]. Various correlations of the
PRGA have also been presented in [10]. A number of RC4 variants are available in
the literature [9], but to date the algorithm is susceptible to a number of security
attacks.
Our Contributions. In this paper, we have modified the basic RC4 algorithm and
KSA+ to improve the security without increasing the encryption time. This is done
by modifying the mathematical computations performed in the algorithm and by
adding more randomness, achieved through deeper scrambling in KSA+. It is evident
from the modifications that a swapping operation is performed for every value of i
and j, which makes the keystream more random and less vulnerable to attacks.
Structure of the Paper. Section 2 briefly elaborates on Roos' biases and the KSA+
algorithm. Section 3 explains the proposed modifications and the design strategy
used to implement the design in VHDL. Section 4 provides a comparative timing
analysis of the parameters obtained from the implementations in C++ and VHDL.
The conclusion is drawn in Sect. 5.
A number of weaknesses have been observed in RC4, in both the KSA and the
PRGA. In this paper, we have focused on Roos' biases.
Roos [5] investigated that the RC4 KSA does not yield a perfectly random
distribution, because of the following reasons:
1. In the initial permutations of the KSA, it is highly likely that S[i] = i.
2. If an index has been swapped by i, it is highly likely that it will not be swapped
again. If we consider 256 bytes, the probability that an index is chosen at random
by j is 1/256, and the probability that it will not be chosen is therefore 255/256. The
probability that it would not be chosen again during all the iterations of the KSA
is (255/256)^256. If i = j, no swapping will be performed, and this corresponds to
approximately 37% [6].
Because of the above weaknesses, the first several thousand outputs from the PRGA
should be discarded to get the state table into a more even distribution.
3 Proposed RC4
In this paper, two RC4 variants, RC4-1 and RC4-2, have been proposed to increase
the security of the algorithm. The available literature shows that RC4 is the simplest
available stream cipher, incurring hardly any overhead cost. Keeping the simplicity
of RC4 in mind, RC4-P1 has been proposed; it provides security without incurring
any overhead in terms of encryption time. RC4+ is a highly secure variant of RC4,
but a number of attacks on it are still possible. RC4-P2 has been proposed to overcome
the weakness of KSA+ by increasing the depth of scrambling.
3.1 RC4-P1
KSA-P1:
    for i = 0 to N − 1 do
        S[i] = i
        T[i] = k[i mod keylength]
    end for
    j = 0
    for i = 0 to N − 1 do
        j = (j + S[i] + T[i]) mod N
        swap S[i], S[i + j + 1]
    end for
PRGA-P1:
    i = 0
    j = 0
    loop
        i = (i + 1) mod N
        j = (j + S[i]) mod N
        swap S[i], S[i + j + 1]
        t = (S[i] + S[j]) mod N
        output Z = S[t]
    end loop
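A Python sketch of this RC4-P1 variant is given below; the only change from basic RC4 is that the swap partner is S[i + j + 1] instead of S[j], and the sketch reduces that index modulo N (an assumption made here so the index stays in range).

def rc4_p1_ksa(key, N=256):
    S = list(range(N))
    T = [key[i % len(key)] for i in range(N)]
    j = 0
    for i in range(N):
        j = (j + S[i] + T[i]) % N
        p = (i + j + 1) % N          # modified swap partner
        S[i], S[p] = S[p], S[i]
    return S

def rc4_p1_prga(S, nbytes, N=256):
    i = j = 0
    out = []
    for _ in range(nbytes):
        i = (i + 1) % N
        j = (j + S[i]) % N
        p = (i + j + 1) % N          # modified swap partner
        S[i], S[p] = S[p], S[i]
        t = (S[i] + S[j]) % N
        out.append(S[t])
    return out

print(rc4_p1_prga(rc4_p1_ksa(b"Key"), 8))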
3.2 RC4-P2
In RC4-P2, the final layer of KSA+ performs the zigzag scrambling operation in
more depth, as shown in Eq. (1), where the index i takes values in the order: 0, 255,
254, 1, 253, 252, 2, … In general, if y varies from 0 to N − 1 in steps of 1, then

i = \begin{cases} y/3, & y \equiv 0 \pmod{3} \\ N - (y + 2)/3, & y \equiv 1 \pmod{3} \end{cases}   (1)
RC4-2 KSA (layers 1 and 2 are similar to KSA+):
    for y = 0 to N − 1 do
        if y ≡ 0 mod 3 then
            i = y/3
        else if y ≡ 1 mod 3 then
            i = N − (y + 2)/3
        j = j + S[i] + K[i]
        swap S[i], S[j]
    end for
The proposed RC4-2 further enhances the keystream randomness. It prevents the
formation of recursive equations linking the key bytes and the permutation bytes.
Further, as the KSA runs only once in RC4, the added operation does not affect the
performance of the cipher. It introduces no extra time as compared to RC4+ but
improves the security of the cipher.
RC4 and its proposed variants RC4-1 and RC4-2 have been implemented in this
paper in C++ and VHDL using basic coding techniques. Performance analysis is
done in terms of execution time, security analysis, and resource usage for each
algorithm. Execution time is the time consumed by the algorithm to convert plaintext
into ciphertext. The encryption time for basic RC4, RC4+, RC4-1, and RC4-2 has
been analyzed in both C++ and VHDL. For the C++ implementation, the time is the
total run time for 256 bytes, including both KSA and PRGA. For the VHDL
implementation, the time for the PRGA is calculated for 12.5 million bytes.
Fig. 1 Execution time for the different RC4 variants (RC4, KSA+, RC4-1, RC4-2) implemented in (a) C++ (seconds) and (b) VHDL (milliseconds)
The execution times for all the RC4 variants are presented in Table 1 and Fig. 1.
From the obtained numerical values, it is found that the execution time for RC4-1 is
similar to that of conventional RC4. Similarly, the time consumed by RC4-2 is similar
to that of RC4+. It is worth mentioning that the proposed RC4-1 and RC4-2 provide
better security without incurring any extra time as compared to basic RC4 and RC4+.
In VHDL, the Algorithmic State Machine (ASM) and Register Transfer Level (RTL)
design strategy has been used, which incorporates timing as we move from one state
to another in the ASM chart [11, 12]. The control path controls the state transitions
and the data path controls the various operations that occur in the states. The control
path can be easily implemented as a Finite State Machine (FSM) and the data path
can be implemented as a multiplexer (MUX). Since the KSA and PRGA consist of
many loops, registers are used to store the loop variables and other variables. Due to
space constraints, the ASM charts are not presented in this paper. The hardware
resource usage for all the RC4 variants is summarized in Table 2. It is observed that
the overall resource utilization of the proposed variants is almost similar to that of
the existing RC4 variants.
In this paper, a theoretical security analysis has been performed and is presented in
Table 3. It is observed that the increased layers of operations enhance the security of
the cipher by removing the identified weaknesses. The obtained results demonstrate
that RC4-1 provides comparatively less security than RC4+/RC4-2, but its execution
time is very low. Therefore, for applications such as multimedia, where security is
not of much concern but performance matters, RC4-1 is recommended. Similarly,
for scenarios where security is of major concern compared to network performance,
such as e-commerce and e-transactions, RC4-2 is recommended.
5 Conclusion
Newer vulnerabilities are being discovered in RC4 every now and then. This raises
the need for a better design that is simple, robust and computationally efficient. Our
proposed modifications are directed towards the goal of achieving better security
without compromising on timing efficiency of the original stream cipher RC4.
In the future, apart from exploring more weaknesses, work can be done on improving
the energy efficiency of the stream cipher RC4 when it is implemented in hardware.
A trade-off between energy efficiency, complexity, and security can be analyzed.
References
1. Stinson DR (1995) Cryptography: theory and practice, 2005th edn. CRC Press, Boca Raton
2. Mantin I (2005) A practical attack on the fixed RC4 in the WEP mode. In: ASIACRYPT, vol
3788, pp 395–411
3. Maitra S, Paul G (2008) Analysis of RC4 and proposal of additional layers for better security
margin. In: INDOCRYPT, vol 5365, pp 27–39
4. Jindal P, Singh B (2015) RC4 encryption-a literature survey. Proc Comput Sci 46:697–705
5. Roos A (1995) A class of weak keys in the RC4 stream cipher
6. Fluhrer S, Mantin I, Shamir A (2001) Weaknesses in the key scheduling algorithm of RC4.
In Selected areas in cryptography, vol 2259, pp 1–24
7. Klein A (2008) Attacks on the RC4 stream cipher. Des Codes Crypt 48(3):269–286
8. Mantin I, Shamir A (2001) A practical attack on broadcast RC4. In: International workshop on
fast software encryption, pp 152–164. Springer, Berlin, Heidelberg
9. Jindal P, Singh B (2017) Optimization of the security-performance tradeoff in RC4 encryption
algorithm. Wirel Pers Commun 92(3):1221–1250
10. Gupta SS, Maitra S, Paul G, Sarkar S (2011) Proof of empirical RC4 biases and new key cor-
relations. In: International workshop on selected areas in cryptography, pp 151–168. Springer,
Berlin, Heidelberg
11. Galanis MD, Kitsos P, Kostopoulos G, Sklavos N, Koufopavlou O, Goutis CE (2004) Com-
parison of the hardware architectures and FPGA implementations of stream ciphers. In: 11th
IEEE international conference on electronics, circuits and systems, ICECS 2004, proceedings
of the 2004, pp 571–574. IEEE
12. Rane DB, Jyoti UF, Shwetal GR, Chandreshwari PB (2013) Hardware implementation of RC4
stream cipher using VLSI. Int J Electron Commun Soft Comput Sci Eng (IJECSCSE) 70
Wide Band Sierpinski Carpet
Rectangular Microstrip Fractal Antenna
Using Inset-Fed for 5G Applications
Abstract The objective of this paper is to propose a wideband Sierpinski carpet
rectangular microstrip fractal antenna with an inset feed for 5G applications. The
proposed antenna design consists of the third iteration of the Sierpinski carpet fractal
on the rectangular patch and a partial ground structure, printed on the two sides of
FR4 epoxy material with a dielectric constant of 4.4 and 0.4 mm thickness, at 28 GHz.
The simulated results show a wide impedance bandwidth of 9.16 GHz and a gain of
8.43 dB. The design can further be configured as an array of fractals for higher gain
and bandwidth, frequency selective surface (FSS), and radar applications.
1 Introduction
In recent years, research is widely going on wideband, high gain, and miniaturized
antennas for wireless communications such as mobile communication systems and
IoT. The wireless systems are making smaller and smaller size (miniaturization),
multiusers without interrupt (due to wideband), which includes low weight, low cost,
high performance of antenna parameters such as gain and directivity, and preferred
easy to manufacturing.
Nowadays, wireless systems use microstrip antennas (MSA) for transmitting and
receiving the signals. It has two thin metallic plates printed on both sides of a dielectric
substrate. The main pros of MSA are low weight, low cost, and low scattering cross
section. But narrow bandwidth and low gain, low efficiency, suffering from surface
waves and low power handle are its main cons. Design of patch is mostly dependent
on operating frequency, type of material, and thickness of the substrate. It is not only
depending on the design of patch (patch) but also feed techniques. Feed techniques
are edge-fed, inset-fed, coaxial-fed, proximity coupling-fed, and aperture coupling-
fed [1–6]. Among all, inset-fed is easy to design and fabricate, but it has mainly two
drawbacks such as high cross polarization (in H-plane) and design complexity at
higher frequencies (the width of patch and feed line are almost equal). The inset-fed
design is mainly dependent on notch width (gap) and depth. The design of notch
formula is defined in [7, 19].
Antenna designers have been looking for new ideas to push the envelope for antennas,
using a smaller volume with enhanced impedance bandwidth and gain. One of the
proposed methods for increasing bandwidth (or for miniaturization) is the use of
fractal geometry on the patch, which gives rise to fractal antennas. A fractal is defined
as a self-similar design that maximizes the electrical path length, used for achieving
wideband or multiband operation, high impedance matching, high gain, and
polarization agility [8–11]. A fractal not only yields wideband behavior but can also
change the polarization of the antenna [12]. Other techniques to achieve wideband
operation are partial grounds (defected ground structures) with or without fractal
geometry [13, 14] and optimized feed design [19].
Wireless systems for 5G applications need low traffic congestion (i.e., wideband
operation for multiple tasks/users), high SNR, and high gain, and 5G antennas are
mostly designed above 27 GHz. The spectrum bands for 5G proposed by the United
States in 2015 to be studied for consideration at the World Radio Communication
Conference (WRC)-19 include 27.5–29.5 GHz, 37–40.5 GHz, 47.2–50.2 GHz,
50.4–52.6 GHz, and 59.3–71 GHz for mobile broadband. In [15], the design of a
dense dielectric patch for wideband operation with low sidelobes is presented, for
both a single patch and an array of dense dielectric patches with EBG on the ground
plane; the single patch achieves a 27.1–29.5 GHz bandwidth with −12.9 dB SLL and
the array achieves a 27.2–30.2 GHz bandwidth with −11.6 dB SLL. A microstrip grid
array antenna on an FR4 substrate with standard PCB technology achieves wideband
operation (7.16 GHz) and 12.66 dBi gain [16]. SNR and gain are enhanced with
minimum mean square error (MMSE) adaptive algorithms applied to 64 elements for
5G communications [17]. A fractal smart antenna using an edge feed is designed to
obtain multiband operation for at least seven converged 5G wireless network services
[18]. Thus, multiband, wideband, high-gain, and low side lobe level (SLL)
performance can be obtained with an EBG structure on the ground, an array of
antennas, dense dielectrics (stacking of dielectric materials), a fractal structure on the
patch, and adaptive algorithms.
Based on the above analysis, a wide impedance bandwidth can be achieved with
good impedance matching. The proposed antenna design is based on fractal geometry
on the patch, a partial ground structure, and an optimized feed technique. This paper
proposes a design with a Sierpinski carpet fractal structure on the rectangular patch
and partial ground copper plates on the two sides of an FR4 (εr = 4.4) substrate, using
an inset feed to achieve a wide impedance bandwidth for 5G applications.
The patch, the ground plane, and the inset-fed design parameters are determined by
predefined formulas [1, 19]. The design is carried out at a 28 GHz operating frequency
on FR4 epoxy substrate material with 0.4 mm thickness. The conventional inset-fed
rectangular patch is given in Fig. 1. The dimensions are tabulated in Table 1:
• The design of Sierpinski carpet fractal antenna, ground design for Wideband
and antenna structure:
Table 1 The dimensions of the patch, feed line, and ground for wideband
Height (H) | Length (L) | Width (W) | Feed line length at 50 Ω (L50) | Feed line width at 50 Ω (W50) | Inset-fed width (X0) | Inset-fed depth (Y0)
0.40 mm | 2.39 mm | 3.26 mm | 2.94 mm | 0.77 mm | 0.16 mm | 0.8 mm
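As a cross-check of the tabulated patch dimensions, the short Python sketch below evaluates the standard transmission-line-model design equations (as given in [1]) for f0 = 28 GHz, εr = 4.4, and h = 0.4 mm; the computed W and L come out close to the 3.26 mm and 2.39 mm listed in Table 1. This is only a sketch of the commonly used equations, not the authors' exact design procedure.

import math

c = 3e8            # speed of light, m/s
f0 = 28e9          # operating frequency, Hz
er = 4.4           # relative permittivity of FR4
h = 0.4e-3         # substrate thickness, m

# Patch width, effective permittivity, length extension, and patch length.
W = c / (2 * f0) * math.sqrt(2 / (er + 1))
e_eff = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 * h / W) ** -0.5
dL = 0.412 * h * ((e_eff + 0.3) * (W / h + 0.264)) / ((e_eff - 0.258) * (W / h + 0.8))
L = c / (2 * f0 * math.sqrt(e_eff)) - 2 * dL

print(round(W * 1e3, 2), "mm", round(L * 1e3, 2), "mm")   # ~3.26 mm, ~2.39 mm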
Case-1: First Iteration:
The first iteration slot dimensions (D1 = L1 × W1) are equal to a scaling factor of 1/3
of Ls and W of the patch, with the slot located 1/3rd of Ls from the top side and 1/3rd
of W from the left side; these details are shown in Table 2 and the structure is shown
in Fig. 2, where Ls is the length of the fractal patch.
Case-2: Second Iteration:
The second iteration dimensions (D2 = L2 × W2) are equal to a scaling factor of 1/9
of Ls and W of the patch; these details are shown in Table 2 and the structure is shown
in Fig. 3.
Case-3: Third Iteration:
The third iteration dimensions (D3 = L3 × W3) are equal to a scaling factor of 1/27
of Ls and W of the patch; these details are shown in Table 2 and the structure is shown
in Fig. 4.
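The Python sketch below generates the occupancy mask of such a Sierpinski carpet up to the third iteration, with the removed slot scaling as 1/3, 1/9, and 1/27 of the patch dimensions; it is a geometric illustration only, not the electromagnetic model of the antenna.

import numpy as np

def sierpinski_carpet(iterations):
    # Start from a solid patch and remove the central one-ninth of every
    # remaining solid square at each iteration (slot sizes 1/3, 1/9, 1/27 ...).
    size = 3 ** iterations
    mask = np.ones((size, size), dtype=bool)
    step = size
    for _ in range(iterations):
        step //= 3
        for r in range(0, size, 3 * step):
            for c in range(0, size, 3 * step):
                mask[r + step:r + 2 * step, c + step:c + 2 * step] = False
    return mask

m = sierpinski_carpet(3)
print(m.shape, m.sum() / m.size)   # 27 x 27 grid, filled fraction (8/9)**3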
• Ground design for Wideband:
It is a partial ground formed by the combination of two rectangles. The dimensions of
the first rectangle are Lg1 × Wg1 and those of the second rectangle are Lg2 × Wg2,
and these dimensions are tabulated in Table 2. The ground structure is shown in Fig. 5.
Table 3 The simulated results of conventional and fractal geometry on the patch with the partial ground
S. no | Type | fR (GHz) | S11 (dB) | VSWR | G (dB) | D (dB) | BW (GHz)
1 | Conventional rectangle with full ground | 28.46 | −32.3 | 1.04 | 5.4 | 6.9 | 1.33
2 | Conventional rectangle with partial ground | 30.34 | −45.7 | 1.01 | 8.5 | 8.9 | 8.38
3 | First iteration | 30.4 | −47.2 | 1 | 8.4 | 9.0 | 8.09
4 | Second iteration | 31.10 | −35.1 | 1.03 | 8.4 | 8.9 | 7.74
5 | Third iteration | 30.09 | −41.2 | 1.01 | 8.43 | 8.9 | 9.16
From Table 3, it is observed that the partial ground and the fractal geometry improve
the gain from 5.4 dB to 8.43 dB, the directivity from 6.9 dB to 8.9 dB, and the
impedance bandwidth from 1.33 GHz to 9.16 GHz, respectively. These improvements
are achieved by increasing the impedance matching between the feed line and the
patch and decreasing the inductive and capacitive loads.
• The proposed structure of Wide Band Sierpinski Carpet Rectangular
Microstrip Fractal Antenna (Third Iteration):
a. The proposed antenna:
This proposed antenna is depicted in Fig. 6, which has a patch with fractal geom-
etry and defected ground structure.
Simulated results of the proposed antenna are shown below.
b. Return loss:
Return loss indicates the performance of the antenna at different frequencies; this
parameter is plotted in Fig. 7. The return loss is minimum (−40 dB) at 30.9 GHz.
c. VSWR:
The impedance bandwidth of the proposed antenna is determined from Fig. 8 as
9.16 GHz (VSWR ≤ 2), extending from 25.51 to 34.67 GHz. The proposed antenna
structure is shown in Fig. 6. The polar plots of the gain and directivity of the proposed
antenna are given in Figs. 9 and 10, and the plot of gain versus frequency of the
proposed antenna is given in Fig. 11.
Fig. 6 The proposed structure of the wideband Sierpinski carpet rectangular microstrip fractal antenna (third iteration)
d. Gain (dB):
See Fig. 9.
e. Directivity (dB):
See Fig. 10.
f. Gain versus Frequency:
See Fig. 11.
4 Conclusion
The impedance bandwidth of the proposed antenna is almost seven times that of the
conventional rectangular patch. The proposed antenna is mainly suitable for 5G
applications like mobile communication, IoT, and Wi-Fi. The future scope of this
work includes radar applications, reconfigurability for shifting the wide band, and
reducing the cross polarization.
References
1. Balanis CA (1997) Antenna theory, analysis, and design. Wiley, New York
2. Garg R, Bhartia P, InderBahl, Ittipiboon A (2000) Microstip antenna design handbook. Artech
House
3. Carver KR, Mink JW (1981) Microstrip antenna technology. IEEE Trans Antennas Propag
29(1):2–24
1 Introduction
Several digital signal processing (DSP) systems need linear filters that can adjust to
changes in the signals they process. Adaptive filters are one such class of filters, and
they are widely used in DSP applications like software-defined radio [1], channel
equalization [2], and noise cancelation. Adaptive filters can be designed as finite
impulse response (FIR) or infinite impulse response (IIR) filters. The adaptive FIR
filter has advantages over the adaptive IIR filter because of its stability and easy
update of its coefficients.
The basic LMS adaptive FIR filter architecture is shown in Fig. 1. In every cycle, the
adaptive filter evaluates the error value and the filter output, and the obtained error
value is used for updating the filter coefficients.
where X(k) is the input signal vector at time 'k' and 'T' denotes the transpose of a
vector. The output signal Y(k) of an adaptive filter can be represented as

Y(k) = W^T(k) X(k)

and the weight vector W(k) is updated as

W(k + 1) = W(k) + µ e(k) X(k)

where e(k) is the error signal and µ is the step size parameter, which determines the
convergence speed and accuracy of the filter. The error signal can be obtained as the
difference between the desired signal and the output signal:

e(k) = D(k) − Y(k)

where D(k) is the desired signal. The input signal of the filter X(k) is fed into
the delay line and then shifted to the right for each sampling period.
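A compact Python/NumPy sketch of this LMS update loop is shown below; the "unknown" system, signal lengths, and step size are illustrative choices, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
unknown = np.array([0.4, -0.2, 0.1, 0.05])   # hypothetical system to identify
N = len(unknown)                             # filter length (number of taps)
mu = 0.05                                    # step size

w = np.zeros(N)                              # adaptive weights W(k)
x_buf = np.zeros(N)                          # delay line holding X(k)
for k in range(2000):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()         # new input sample
    d = unknown @ x_buf                      # desired signal D(k)
    y = w @ x_buf                            # filter output Y(k)
    e = d - y                                # error e(k) = D(k) - Y(k)
    w = w + mu * e * x_buf                   # LMS weight update

print(np.round(w, 3))                        # approaches `unknown` as k grows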
2.1 Background of DA
In DA, the inner product of a vector of fixed coefficients with a vector of input words
is computed as

y = \sum_{k=1}^{K} d_k x_k

where d_k is a fixed coefficient, x_k is the input signal, and K is the number of input
words. Each x_k is a 2's complement binary number scaled such that |x_k| < 1; then
x_k can be expressed as
x_k = -b_{k0} + \sum_{l=1}^{L-1} b_{kl} 2^{-l}   (7)
y = \sum_{k=1}^{K} d_k \left[ -b_{k0} + \sum_{l=1}^{L-1} b_{kl} 2^{-l} \right]   (8)

Equation (8) is the usual inner-product expression. By changing the order of
summation, we finally get the following equation.
y = \sum_{l=1}^{L-1} \left[ \sum_{k=1}^{K} d_k b_{kl} \right] 2^{-l} + \sum_{k=1}^{K} d_k (-b_{k0})   (9)
Equation (9) gives the distributed-arithmetic form of the computation, and the inner
product is computed by

y = \sum_{n=1}^{N-1} 2^{-n} C_n - C_0   (10)

where C_n = \sum_{k=0}^{K-1} d(k) b(k)_n.
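The Python sketch below illustrates Eq. (10): a 2^K-entry look-up table stores every possible sum of the fixed coefficients, and the inner product is accumulated one bit-plane of the inputs at a time, with the sign-bit term subtracted at the end. The coefficients, word length, and test inputs are illustrative values, not the filter of the paper.

K, L = 4, 8
d = [0.25, -0.5, 0.125, 0.75]                        # fixed coefficients d_k

# LUT[addr] = sum of d_k over the coefficients whose bit is set in addr.
LUT = [sum(d[k] for k in range(K) if (addr >> k) & 1) for addr in range(2 ** K)]

def da_inner_product(x):
    # Quantize each input to an L-bit two's-complement fraction in [-1, 1).
    q = [int(round(v * 2 ** (L - 1))) & (2 ** L - 1) for v in x]
    y = 0.0
    for l in range(1, L):                            # magnitude bit-planes
        addr = sum(((q[k] >> (L - 1 - l)) & 1) << k for k in range(K))
        y += LUT[addr] * 2.0 ** (-l)                 # 2**(-l) * C_l terms
    sign_addr = sum(((q[k] >> (L - 1)) & 1) << k for k in range(K))
    return y - LUT[sign_addr]                        # subtract the C_0 term

x = [0.5, -0.25, 0.875, -0.125]
print(da_inner_product(x), sum(dk * xk for dk, xk in zip(d, x)))   # both 0.265625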
The DA-based adaptive FIR filter architecture [8] for a filter length of K = 4 is shown
in Fig. 2. It consists of a four-point inner-product block, a weight-updating block,
and a control unit block. Along with these, an error calculation unit for e(n) and a
sign-magnitude controller unit are present.
The four-point inner-product block consists of a DA table, multiplexers, XOR gates,
and a carry-save accumulator, as shown in Fig. 3. The DA table has 16 registers,
which store the 16 combinations of inner-product values shown in Fig. 5 and feed
them as inputs to the 16:1 multiplexer. The feedback filter coefficients from the
weight-updating block act as the selection lines of the multiplexer, and the output of
the multiplexer is given to the carry-save accumulator unit. The output of the MUX
is also given to an XOR gate for sign control: if the MSB of the MUX output is zero,
a normal addition is performed; otherwise a 2's complement addition is performed.
After 'K' clock cycles, the carry-save accumulator gives the sum and carry outputs.
By applying a fast bit-clock to the carry-save adder of the accumulator, the throughput
rate of the architecture is increased. The generated sum and carry are added to obtain
the final filter output y(k). The filter output y(k) is subsequently subtracted from the
desired signal D(k) to obtain the error sample e(k).
The weight increment block consists of four barrel shifters, four adder/subtractor
units, and a word-parallel to bit-serial converter, as shown in Fig. 4. The multiplication
of the input x_k with the error e(k) is performed using the barrel shifters, and the
output of each barrel shifter is added to/subtracted from the current weights to obtain
the weight update. The updated weights are given as selection lines to the 16:1
multiplexer of the four-point inner-product unit.
In the basic DA-based adaptive FIR filter, the inner products are computed using a
DA table built from registers and adders. The inner-product structure of the DA table
is shown in Fig. 5. The input x(k + 1) of word length L is fed to a register to obtain
x(k), and x(k) is given to another register to obtain x(k − 1). The input samples
x(k + 1) and x(k) are added and passed through a register to produce x(k) + x(k − 1).
Similarly, the registers and adders generate the DA table outputs x(k − 2),
x(k) + x(k − 2), x(k − 1) + x(k − 2), and so on, and these 16 outputs are passed as
inputs to the 16:1 multiplexer with the filter coefficients as selection lines. The DA
table outputs are obtained using 15 registers and 7 adders; these registers occupy
more area and consume more power. To reduce the area, we propose a novel pipelined
DA table as shown in Fig. 6.
of 11 registers with increase of 3 adders. By using the proposed design, the area of
DA-based adaptive filter can be reduced.
5 Simulation Results
Table 1 Performance comparison of ASIC synthesis results for CMOS 90 nm technology for a 16-tap FIR filter

Design           Area     ADP      PDP    MSP   MSF  Power
Filter [8]       18,257   94,936   32.38  5.20  192  6.2269
Filter [9]       71,195   127,439  60.76  1.79  558  33.944
Proposed filter  17,163   32,757   30.03  5     200  6.006

Units: Area in sq. µm, ADP in sq. µm × ns, PDP in mW × ns, MSP in ns, MSF in MHz, Power in mW
Fig. 7 Simulation results for 4-tap pipelined DA-based adaptive FIR filter
6 Conclusion
We have designed and implemented a pipelined DA-based adaptive FIR filter for high throughput, low power, and low area. The throughput of the proposed architecture is increased by applying a fast clock to the carry-save adder of the accumulator. The reduction in the number of delays in the pipelined DA table architecture has reduced the area. Weight update and LUT update are performed in parallel. The proposed architecture consumes 14% less power and occupies 30% less area when compared with the basic DA-based adaptive filters.
References
1. Hentschel MH, Fettweis G (1999) The digital front-end of software radio terminals. IEEE Personal Commun Mag 6(4):40–46
2. Vaidyanathan PP (1993) Multirate systems and filter banks. Prentice Hall, Englewood Cliffs, NJ
3. Croisier A, Esteban DJ, Levilion ME, Rizo V (1973) Digital filter for PCM encoded signals. U.S. Patent 3,777,130, 4 Dec 1973
4. Zohar S (1973) New hardware realization of nonrecursive digital filters. IEEE Trans Comput C-22:328–338
5. White SA (1989) Applications of distributed arithmetic to digital signal processing: a tutorial review. IEEE ASSP Mag 6(3):4–19
6. Allred DJ, Yoo H, Krishnan V (2005) LMS adaptive filters using distributed arithmetic for high throughput. IEEE Trans Circuits Syst 52(7):1327–1337
7. Guo R, Debrunner LS (2011) Two high-performance adaptive filter implementation schemes using distributed arithmetic. IEEE Trans Circuits Syst II Exp Briefs 58(9):600–604
8. Meher PK, Park SY (2011) High-throughput pipelined realization of adaptive FIR filter based on distributed arithmetic. In: Proceedings of the 2011 IEEE/IFIP 19th international conference on VLSI and system-on-chip (VLSI-SoC 2011), Oct 2011, pp 428–433
Abstract Over the last few years, there has been a rapid increase in the number of
people who are using high data rate services and applications. To be able to support
the ever-increasing data rates to the end users, the time has come to consider other
technologies which use the upper parts of electromagnetic spectrum other than the
Radio Frequency (RF) spectrum. One such technology which can be used as an
alternative to RF is Free-Space optical (FSO) communications. It provides higher
bandwidth and improved Bit Error Rate (BER) performance because of negligible
multipath and fading effect. In this article, the performance of an RF system and an
FSO system using BPSK-subcarrier intensity modulation (BPSK-SIM) in a weak
turbulent atmosphere is compared. Simulation results show that the FSO system has a performance gain of 62% over the RF system in achieving a BER of 10^{-5}.
1 Introduction
Recent high demands of services and increase in the wireless data usage have led to
two major issues, namely spectrum congestion and last mile access bottleneck. As
the number of users continues to increase, the volume of the data traffic carried on
the wireless networks will be increased at an unprecedented speed. To meet the vari-
ous high data rate applications, RF-based wireless communications have limitations
on scalability and bandwidth [1]. To overcome this problem, one of the technolo-
gies proposed in the literature is optical wireless communications (OWC). It is a
cost-effective as well as high bandwidth providing a technique and it can be used
for both commercials as well as military applications. OWC can be classified into
three major categories based on their frequency range of operation, (i) near-infrared
band (of wavelength 750–1550 nm) known as FSO. (ii) Visible band (of wavelength
390–750 nm) known as visible light communications (VLC). (iii) Ultraviolet band
(of wavelength > 1550 nm) known as ultraviolet communications (UVCs) [2]. FSO
communications can support many high data rate applications when compared to RF communications, as it has very high optical bandwidth. FSO has several advantages over traditional RF communications. FSO installation can be completed faster as it does not require digging to lay fiber. Other advantages are low cost, license-free spectrum, freedom from interference, and higher security when compared to RF communications [3]. FSO communication uses lasers or light-emitting diodes (LEDs) to optically transmit the data through the atmosphere. A line-of-sight (LOS) path is required for FSO systems in order to transmit data successfully, which is the major limitation when compared to an RF system. FSO link performance does not suffer from multipath fading effects but is highly influenced by atmospheric conditions. The main challenges for FSO communication through the atmosphere are attenuation of the optical power and degradation of the received optical signal quality. The received optical signal is affected by rain, fog, haze, and snow. Even under clear weather conditions, atmospheric parameters cause variations in the refractive index, leading to deterioration in the received optical signal quality. This can cause the quality of the received signal (i.e., its magnitude and phase) to fluctuate rapidly at the receiver end [4]. This rapid fluctuation is known as scintillation. FSO communication system performance degrades because scintillation increases the error probability of the FSO link. There-
fore, by choosing a proper modulation scheme, the performance of an FSO system
can be improved. The most widely used and simple modulation scheme for optical
communications is the on-off keying (OOK) technique [3], but OOK is prone to the
turbulence induced fluctuations of the received optical signal. The BPSK modulation
scheme gives better performance when compared to OOK as the phase of the BPSK
signal carries the data [5]. Thus, in order to achieve a particular BER value, FSO
system using BPSK-SIM requires lower signal-to-noise ratio (SNR) compared to the
OOK scheme for the given atmospheric conditions. The remaining sections of the
paper are as follows: the system model is explained in Sect. 2. The simulation results
are presented in Sect. 3. And finally, conclusions are presented in Sect. 4.
2 System Model
The block diagram of the BPSK-SIM FSO link is shown in Fig. 1 and it has been
explained in detail in the following subsections.
[Fig. 1: block diagram of the BPSK-SIM FSO link — input data → BPSK modulator → laser driver → channel → photodetector → BPSK demodulator → output data]
where P is the photodetector responsivity, x(t) is the subcarrier signal, and z(t) is
modeled as AWGN with mean zero and variance σn2 . α is the optical modulation
index, and to avoid over-modulation |α x(t)| ≤ 1. A coherent demodulator is used in
order to recover the input data.
Several channel models have been proposed in the literature to characterize the
distribution of turbulence induced fading in FSO systems. Log-normal turbulence
model has been used most widely under weak turbulence conditions because it is
mathematically convenient and tractable. The link length, the wavelength of the optical radiation, and the refractive index structure parameter C_n^2 characterize the fading strength of the turbulence channel. The log-normal model is characterized by the Rytov variance σ_l^2. The log-normal model is valid for σ_l^2 < 1.2, and this is its major
limitation. The Rytov variance σl2 in terms of the propagation distance “L” and wave
number “k” is given as [6]
σ_l^2 = 1.23\, C_n^2\, k^{7/6}\, L^{11/6}                    (2)
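As a quick numerical check of Eq. (2), the sketch below evaluates the Rytov variance; the refractive index structure constant, wavelength, and link length used here are illustrative assumptions, not the values of the simulated system.

import numpy as np

def rytov_variance(Cn2, wavelength, L):
    """sigma_l^2 = 1.23 * Cn^2 * k^(7/6) * L^(11/6), with k = 2*pi/lambda."""
    k = 2.0 * np.pi / wavelength
    return 1.23 * Cn2 * k ** (7.0 / 6.0) * L ** (11.0 / 6.0)

# Assumed example parameters: Cn^2 = 1e-15 m^(-2/3), 1550 nm laser, 1 km link
sigma2 = rytov_variance(Cn2=1e-15, wavelength=1550e-9, L=1000.0)
print(f"Rytov variance: {sigma2:.4f}  (log-normal model valid while < 1.2)")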
The performance of the system is estimated in terms of BER metric under different
channel conditions by considering effects of scintillation and noise. The theoretical
unconditional BER per subcarrier channel is given by [7, 8]
P_e = \int_0^{\infty} Q\big(\sqrt{\gamma(Y)}\big)\, \frac{1}{\sqrt{2\pi}\,\sigma_l Y} \exp\left( -\frac{\big(\ln(Y/Y_o) + \sigma_l^2/2\big)^2}{2\sigma_l^2} \right) dY          (4)
3 Results
The FSO system model under consideration is simulated with the specifications listed
in Table 1 by using Monte Carlo approach.
The BER performance of an FSO system under a weak turbulence channel, for various values of the log-intensity variance σ_l^2 ∈ {0, 0.1, 0.3, 0.5}, is compared with that of an RF system under Rayleigh and Rician channels using the BPSK modulation scheme, as shown in Fig. 2.
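A minimal Monte Carlo sketch of this kind of comparison is given below. The bit count, the unit-mean normalization of the log-normal irradiance, and the chosen σl² value are assumptions for illustration and do not reproduce the exact settings of Table 1.

import numpy as np

rng = np.random.default_rng(1)
n_bits = 200_000
bits = rng.integers(0, 2, n_bits)
s = 2 * bits - 1                                     # BPSK symbols

for snr_db in (5, 10, 15):
    snr = 10 ** (snr_db / 10)
    noise = rng.normal(0.0, np.sqrt(1 / (2 * snr)), n_bits)

    # AWGN-only reference
    ber_awgn = np.mean((s + noise > 0).astype(int) != bits)

    # Log-normal irradiance with assumed sigma_l^2 = 0.3, normalised so E[Y] = 1
    sigma_l2 = 0.3
    Y = np.exp(rng.normal(-sigma_l2 / 2, np.sqrt(sigma_l2), n_bits))
    ber_fso = np.mean((Y * s + noise > 0).astype(int) != bits)

    print(f"SNR {snr_db:2d} dB  AWGN BER {ber_awgn:.4f}  log-normal BER {ber_fso:.4f}")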
From the simulation results, it can be inferred that the performance of an FSO
system under weak turbulence is better when compared to an RF system performance
under the Rician channel. The performance of an FSO system degrades as the atmo-
spheric turbulence variance value increases. To achieve a BER of 10−5 , FSO system
Fig. 2 BER performance comparison of an RF system versus FSO system against SNR
without any turbulence effect requires only ~15 dB SNR, whereas an RF system
under Rician and Rayleigh channels requires ~28 dB and ~40 dB SNR, respectively.
However, the SNR required in achieving a particular BER value increases as the
atmospheric turbulence strength increases.
4 Conclusions
References
1 Introduction
[Figure: OFDM receiver block diagram — received signal r(t) → ADC and S/P → remove cyclic prefix → FFT → signal de-mapper → P/S → output]
2 Frequency Offset
[Figure: subcarrier spectrum (magnitude versus frequency) illustrating the carrier frequency offset]
(1) Fractional CFO, which introduces ICI and degrades the performance of Bit
Error Rate (BER) and
(2) Integer CFO, which introduces the cyclic shift of data subcarriers and phase
change.
The OFDM transmitter and receiver equations with CFO are given as follows.
At the transmitter,

b(n) = \frac{1}{X} \sum_{m=0}^{X-1} B(m)\, e^{j2\pi n m / X}          (1)

At the receiver,

a(n) = b(n)\, e^{j2\pi n \varepsilon / X} + w(n)                      (2)

where n = 0, 1, …, X − 1, ε is the normalized carrier frequency offset, and w(n) is the additive noise.
Due to the carrier frequency offset, the Bit Error Rate (BER) performance degrades, even as the signal-to-noise ratio (SNR) increases, for the selected range of CFO values [9, 10].
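A short sketch of Eqs. (1) and (2) is given below: an OFDM symbol is formed with an IFFT, a normalized CFO rotates the time-domain samples, and the mismatch at the FFT output indicates the resulting ICI. The subcarrier count, the QPSK mapping, and the FFT normalization are illustrative assumptions rather than the paper's exact simulation setup.

import numpy as np

rng = np.random.default_rng(0)
X = 64                                              # number of subcarriers
data = (2 * rng.integers(0, 2, X) - 1) + 1j * (2 * rng.integers(0, 2, X) - 1)
B = data / np.sqrt(2)                               # unit-energy QPSK symbols

b = np.fft.ifft(B) * np.sqrt(X)                     # transmitter, cf. Eq. (1)
n = np.arange(X)
eps = 0.1                                           # normalised CFO
a = b * np.exp(1j * 2 * np.pi * n * eps / X)        # received samples, cf. Eq. (2), noiseless

A = np.fft.fft(a) / np.sqrt(X)
ici_power = np.mean(np.abs(A - B) ** 2)             # attenuation + ICI error power
print(f"Mean error power at eps = {eps}: {ici_power:.4f}")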
3 Results
This section illustrates the behavior of OFDM with different CFO ranges in terms of BER versus SNR for various modulations such as BPSK, QPSK, 8PSK, 16PSK, 32PSK, 64PSK, and 16QAM. All simulations are performed using MATLAB software. The offset ranges from 0 to 0.2 with an interval of 0.05 under a Gaussian channel. The simulated results are compared across the modulation techniques for the above-selected range of CFOs. It is observed that the BPSK modulation technique has the lowest BER for null offset when compared to the other modulation techniques. The BER results versus SNR in dB for the various modulations under different CFOs are plotted in Fig. 3 and tabulated in Table 1.
The BER of the CFO-impaired OFDM system gradually decreases as SNR increases, whereas at a fixed SNR, the BER increases with CFO, as shown in Table 1. BPSK modulation has the least BER compared to 16QAM for CFO = 0, and the BER increases with increasing CFO. Hence, it is observed that the BPSK modulation technique has the lowest BER for null offset compared to the other modulations.
Fig. 3 Performance of BER with SNR for CFO range from 0 to 0.2 by applying various modulation
techniques a BER for BPSK modulation, b BER for QPSK modulation, c BER for 8PSK modulation,
d BER for16PSK modulation, e BER for 32PSK modulation, f BER for 64PSK modulation, g BER
for 16QAM modulation
Table 1 CFO OFDM system performance in BER versus SNR for different modulations

(a) BER for BPSK modulation
CFO     SNR=0     SNR=2     SNR=4     SNR=6     SNR=8     SNR=10
0       0.078644  0.037438  0.012452  0.002327  0.000198  4.69E-06
0.05    0.084684  0.042579  0.015614  0.003599  0.00044   2.03E-05
0.1     0.101945  0.057871  0.026447  0.009198  0.002299  0.000369
0.15    0.131541  0.085102  0.048823  0.024485  0.010701  0.004013
0.2     0.173363  0.12615   0.085555  0.054676  0.032775  0.019105

(b) BER for QPSK modulation
CFO     SNR=0     SNR=2     SNR=4     SNR=6     SNR=8     SNR=10
0       0.291712  0.197715  0.109498  0.045531  0.011942  1.52E-03
0.05    0.309956  0.21954   0.13418   0.067938  0.026476  7.34E-03
0.1     0.360026  0.282652  0.209367  0.145056  0.092691  0.054682
0.15    0.433372  0.377812  0.326538  0.278971  0.237977  0.203622
0.2     0.517500  0.487174  0.459823  0.439834  0.423465  0.413179

(c) BER for 8PSK modulation
CFO     SNR=0     SNR=2     SNR=4     SNR=6     SNR=8     SNR=10
0       0.577309  0.490926  0.389978  0.279327  0.17411   0.087213
0.05    0.593938  0.51873   0.432836  0.343744  0.261075  0.18912
0.1     0.641323  0.589317  0.538837  0.496718  0.464486  0.443959
0.15    0.701901  0.676556  0.659909  0.653526  0.660193  0.67378
0.2     0.760627  0.754763  0.75783   0.76987   0.78765   0.805837

(d) BER for 16PSK modulation
CFO     SNR=0     SNR=2     SNR=4     SNR=6     SNR=8     SNR=10
0       0.776788  0.726357  0.661444  0.581534  0.487653  0.382983
0.05    0.786784  0.744514  0.691984  0.634854  0.576304  0.524866
0.1     0.815534  0.788336  0.762869  0.744088  0.737901  0.746335
0.15    0.848979  0.838316  0.833515  0.838197  0.851133  0.870782
0.2     0.881541  0.881165  0.885948  0.896666  0.910405  0.925125

(e) BER for 32PSK modulation
CFO     SNR=0     SNR=2     SNR=4     SNR=6     SNR=8     SNR=10
0       0.886959  0.86079   0.82542   0.781804  0.727747  0.660688
0.05    0.892352  0.869707  0.84288   0.811607  0.780671  0.754652
0.1     0.906378  0.893055  0.88023   0.871598  0.870809  0.879839
0.15    0.92483   0.918941  0.91772   0.921059  0.929513  0.941159
0.2     0.940813  0.941132  0.943667  0.949845  0.957452  0.965181

(f) BER for 64PSK modulation
CFO     SNR=0     SNR=2     SNR=4     SNR=6     SNR=8     SNR=10
0       0.943425  0.929761  0.91226   0.890116  0.861768  0.82628
0.05    0.945672  0.934871  0.920863  0.905502  0.889098  0.875956
0.1     0.95322   0.946372  0.940327  0.936477  0.935768  0.940466
0.15    0.961951  0.959393  0.959005  0.960477  0.96483   0.971102
0.2     0.970549  0.97053   0.971931  0.975017  0.97901   0.98292

(g) BER for 16QAM modulation
CFO     SNR=0     SNR=2     SNR=4     SNR=6     SNR=8     SNR=10
0       0.74922   0.749587  0.74987   0.74983   0.750437  0.750481
0.05    0.74883   0.748692  0.749475  0.748851  0.749652  0.748608
0.1     0.753488  0.751015  0.749188  0.748386  0.747768  0.749256
0.15    0.782832  0.775532  0.771145  0.765766  0.76428   0.762372
0.2     0.845952  0.843123  0.841034  0.839796  0.838629  0.83769
4 Conclusions
References
1 Introduction
Image content mining is one of the most effective techniques compared to text extraction. In image mining, content is extracted effectively so that the efficiency of retrieval is greatly improved. Image extraction is a challenging task for many researchers: creating image data sets is easy, but extracting information from them is complex. An image file is a rich collection of information drawn from a variety of sources, and extracting this information is a real challenge for the user community [1]. Another important problem is that extraction from an image source requires background knowledge; the information must be arranged properly, otherwise the search time increases [2, 3]. Image data sets are used everywhere today, support many applications, and depend on the users' needs. These image data sets need to be arranged properly, otherwise users never obtain the proper information [4]. A variety of techniques are available for this, and clustering has emerged as one of the most efficient techniques for organizing such data sets. Information extraction is done either content based or query based. Most image extraction works on content-based retrieval, where the query image and the retrieved images are largely similar, so that the user gets the exact information. Because of this variety of applications, this technology has attracted increasing attention from the research community, and most research work in the past decade has been highly motivated in this area. Image extraction considers the image pixels, the difference between one image frame and another, the time taken or time difference between frames, and the information available in each frame; all of this information is used to find the difference between one frame and another [5].
Above all, image mining is one of the thrust areas for many researchers today. Much research has been carried out in this domain, but because of its complex nature none of the techniques has yet produced fully effective results [6]. Existing techniques work well only for specific types of video files, and their retrieval process takes a very long time. These observations motivated the new idea on which this research paper focuses. The main challenge for video extraction is that the dynamic video must be converted into static image files. The same type of video file never produces the same frame values; they depend on the camera, time, user, and other factors. Videos from unprofessional users contain a lot of noise, and irrelevant information never yields the exact information. Therefore, extraction of the needed information while accounting for the above factors is essential.
2 Existing System
Many existing image mining techniques focus on image properties such as pixel values, the time difference between one shot and another, the content available in the frame, and so on. Frames are differentiated based on these image properties alone [7]. However, experimental outcomes verify that none of these techniques works well for all types of video files. Because video frame rates are variable, most techniques work well only for a particular type of video file [8].
3 Proposed System
4 Experimental Setup
A video is a medium that embeds visual, motion, audio, and textual information. Given this huge amount of information, it is necessary that image frames are arranged properly to avoid time delays during image extraction. This process is carried out in various steps, one of the most important of which is clustering. Clustering is the technique that groups relevant information together while irrelevant information is grouped separately. As a result, all objects with similar properties are grouped into one class. The similarity is obtained from the attribute values of the objects; the basic attributes are distance, pixel value, and other common factors, if any. Clustering algorithms are classified based on their method of classification.
See Fig. 1.
Video files are divided into a number of segments. Among these, certain frames are unwanted for the retrieval process, and in some segments the image feature values are missing; the user first eliminates these segments. Due to this elimination, the search time is reduced and the storage required for the segments is also improved. The video retrieval process is performed in two ways: one is a client-side operation and the other is a server-side operation. Both operations have unwanted segments that should be removed first. After duplicate frames are removed, the remaining frames are grouped together, their segment values are extracted using the image histogram technique, and these values are used for the image retrieval process. The removal of unwanted images is described below. In this process, the image pixel color values and the difference between the pixels of two shots are used to eliminate duplicate frames. After a duplicate image is found, it is removed from the data set, and the remaining frames are considered for further processing. The frame selection is based on the input video files; video files of approximately 4 minutes are considered. For these video files, the above process is performed and the input is selected. This process is shown in Figs. 2, 3, 4, 5, 6, 7 and Tables 2, 3, 4, 5, 6, 7, and 8, as per the proposed architecture shown in Fig. 1, for the cartoon video files. The number of clusters versus the time taken is shown in Table 1.
Fig. 3 Cartoon graph frame count versus time taken for cartoon video file
Fig. 4 Performance graph frame count versus time taken for cartoon video file
way. Most hierarchical clustering algorithms work on a tree structure that represents the data set. In the tree structure, the root node represents the data set, i.e., the input set, and the leaf nodes are the different data points available in the data set. The differences between the data points are considered to form the clusters and subclusters. If data points have the same value, they are represented at the same level; if the difference is larger, they are represented at the next level, and in this way the tree structure is formed. The quality of a cluster is determined based on the distance, and within the same level the cluster values or data points are very close to each other [9]. Hierarchical clustering performs its operation in two different ways. In the agglomerative technique, clusters are built up over the data set so that eventually all the values of the given data set form a single cluster [10]; this operation is performed in various steps, and finally all small groups are combined together into one single cluster. The second technique is just the opposite of the first: the process starts with the whole data set and keeps dividing the entire group into smaller and smaller groups until a fruitful result is obtained. Here the information is divided, and finally the resultant clusters are obtained; thus, divisive clustering is the reverse of the earlier process. Compared with existing clustering algorithms, hierarchical clustering produces good results, and it works well for video data files and multimedia files. This algorithm allows the user to backtrack and helps the user to resolve their problems quickly.
The procedure helps to find duplicate frames among the frames available in the data sets. The main operations of this procedure are as follows.
1. Image element drilling: The given video files are divided into frames; the image features of each frame are extracted and stored in the database for further image processing.
2. Creation of image identifiers: Each frame is compared with the others to identify the difference between the frames. The process continues for all available frames. Frames with a large difference value are extracted and stored separately [11]. In image mining, this is called the preprocessing step, and it helps remove unwanted frames from the given data set.
3. Apply a data mining algorithm to produce the objects.
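An illustrative sketch of the first two steps using OpenCV is shown below; the video path, the histogram bin count, and the Bhattacharyya-distance threshold are hypothetical values chosen for demonstration, not parameters taken from the paper.

import cv2
import numpy as np

def keyframes_by_histogram(video_path, bins=32, threshold=0.35):
    cap = cv2.VideoCapture(video_path)
    kept, prev_hist = [], None
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Step 1: extract the image feature (a colour histogram) for each frame
        hist = cv2.calcHist([frame], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
        hist = cv2.normalize(hist, hist).flatten()
        # Step 2: compare with the previously kept frame; keep only frames that differ
        if prev_hist is None or cv2.compareHist(
                prev_hist.astype(np.float32), hist.astype(np.float32),
                cv2.HISTCMP_BHATTACHARYYA) > threshold:
            kept.append(idx)
            prev_hist = hist
        idx += 1
    cap.release()
    return kept   # indices of non-duplicate frames passed on to clustering

# Example usage with a hypothetical input file:
# print(keyframes_by_histogram("sample_video.mp4"))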
5 Experimental Outcomes
Cyan- cartoon, Brown- Graphics, Rosy brown- News, Red- Movie, Green- Natural
(see Tables 1, 2, 3, 4, 5, 6, 7, 8 and Figs. 2, 3, 4, 5, 6, 7 and 8).
6 Conclusions
Existing clustering techniques have some difficulties and it never produces a good
result in the case of complex data sets, especially image data. This forces us to bring
about a new technique for video data retrieval. In this paper, a new framework for
image retrieval in an unconstrained video is proposed. Our system utilizes four major
techniques for efficient image retrieval. Initially, the user gives the sample video as
input. The received video is then converted into frames. The noise removal process
is conducted, after which redundant frames are removed using the RGB value. After
completing the frame extraction process, hierarchical-based clustering is utilized for
clustering the images. It has been verified experimentally that the proposed technique
works well for all types of video files.
Future Enhancement
Multimedia is the combination of audio, video, text, animation, sound, and motion.
In the proposed technique, video is taken as input for the image retrieval process. In
future, the technique may be combined with some other multimedia content.
References
1. Panda et al (2016) Hybrid data mining approach for image segmentation based classification.
Int J Rough Sets Data Anal 3(2):65–81
2. Algur et al (2016) Web video object mining: a novel approach for knowledge discovery. Int J
Intell Syst Appl 8(4):67–75
3. Saravanan D Information retrieval using hierarchical clustering algorithm. Int J Pharm Technol
8(4):22793–22803
4. Saravanan D Effective video data retrieval using image key frame selection. Advanc Intell Syst
Comput 145–155
5. Saravanan D (2016) Video content retrieval using image feature selection. J Biotechnol 13(3),
215–219
6. Bhilasha Y et al (2016) An efficient video data security mechanism based on RP_AES. Int J
Advanc Technol Eng Explorat 3(16):36–42
7. Yang Y, Nie F, Xu D, Luo J, Zhuang Y, Pan Y (2012) A multimedia retrieval framework based
on semi-supervised ranking and relevance feedback. IEEE Trans Patt Anal Mach Intell 34(5):723–742
8. Saravanan D (2016) Design and implementation of feature matching procedure for video frame
retrieval. Int J Control Theor Appl 9(7):3283–3293
9. Zhang T, Ramakrishnan R, Livny M (1996) BIRCH: an efficient data clustering method for very
large databases. In: Proceedings of the ACM SIGMOD conference on management of data.
Montreal, Canada, pp 103–114
10. Ester M, Kriegel H-P, Sander J, Xu X (1996) A density-based algorithm for discovering clusters
in large spatial databases with noise. In: International conference on knowledge discovery in databases
and data mining (KDD-96), Portland, Oregon, August 1996
11. Saravanan D (2016) Segment based indexing technique for data file. Proc Comput Sci
87(2016):12–17
Local Ternary Pattern Alphabet Shape
Features for Stone Texture Classification
1 Introduction
Textures are majorly classified into regular and irregular textures [1, 2]. Regular
textures are composed of structurally repeated similar patterns with a certain orga-
nized existence. The morphological shape component consists of a set of patterns or
structuring elements. A pattern represents a shape and study of shape patterns over
textures is considered to be a useful step to characterize textures and their recog-
nition. Various approaches exist in the literature to investigate the structural and
textural features of an image and its data spatially. These measures include fractal
dimension [3], Fourier analysis [4], local variance measures [5], and variograms [6].
Fourier analysis is considered to be the most suitable for dealing with the existence
of regular/similar patterns within an image. It is useful in filtering out speckles in
radar data and to exclude the effect of repeated patterns in agricultural image data [7].
The study of patterns based on the fundamental characteristic of local variance is another important area of research in the characterization and classification of textures [8]. That is why the present study investigates how the frequencies of occurrence of various texture primitive patterns, which are essentially shape components, classify stone textures.
Various pattern-based approaches are proposed in the literature to characterize and
classify the textures, which include methods based on movement along edge [9], long
linear patterns [10], images that are preprocessed [11], description of marble texture
[12], etc. Characterization and classification of texture images can be performed
using wavelet transforms techniques and its variants, which are based on statistical
features and primitive patterns [13–15].
The method proposed by Sasi et al. [16] using wavelets classified stone textures
into four categories and achieved a grouping accuracy of 94.56%. Ravi Babu et al.
[17] and Sumalatha et al. [18] proposed methods to classify stone textures using
pattern-based approaches on gray level preprocessed images and achieved good clas-
sification accuracy and classified stone texture into four groups. The method based
on Textons by Sujatha et al. [19] also classified the textures into four groups with
good classification accuracy.
In most approaches for stone image classification, researchers have tried to analyze and portray textures based on overlapping subpatterns. The present work proposes a pattern-based approach to group stone textures into one of four classes using the occurrence of alphabet texture element patterns over a 3 × 3 sub-image.
The remaining part of the paper is structured as follows. The second section explains the process of extraction of Alphabet Shape patterns to classify stone texture images using texture fundamentals on the gray-level image. Grouping of the stone textures based on the user-defined classification algorithm derived from the texture pattern count, and the analysis of the results, are explained in section three. Section four provides a comparison of the present method with other existing methods, and finally, the conclusion is included in section five.
Fig. 1 Representation of the neighborhood over the 3 × 3 window (pixels P1–P9, with P5 at the centre)
converted ternary values which form a Texture Element Pattern (TEP). The mean
value M in Fig. 2a is 48.
the algorithm. The present algorithm works irrespective of the size of the texture,
i.e., size invariant property is adopted in the proposed method.
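The sketch below illustrates forming a ternary-coded 3 × 3 texture element of the kind described above, coding each pixel against the window mean M with a tolerance t. The tolerance value and the example window are assumptions for illustration and are not the exact coding rule of the proposed method.

import numpy as np

def ternary_texture_element(window, t=5):
    """window: 3 x 3 array of grey levels; code each pixel against the mean M."""
    M = int(round(window.mean()))
    codes = np.zeros_like(window, dtype=int)
    codes[window >= M + t] = 1       # pixels clearly brighter than the mean
    codes[window <= M - t] = -1      # pixels clearly darker than the mean
    return codes                     # 0 marks pixels close to the mean M

w = np.array([[52, 40, 47],
              [49, 48, 60],
              [30, 48, 55]])
print(ternary_texture_element(w))    # window mean is 48, as in the example of Fig. 2a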
Fig. 7 Distribution of TPC on the sample texture images based on proposed method
For the proposed method, 1400 images were collected from different databases. Of these, 600 images are used for generating the feature set using Algorithm 1, and the remaining 800 stone images are used for testing the proposed algorithm. The classification results of each category are listed in Table 2, and the graph showing the classification accuracy is given in Fig. 8. The average classification accuracy of stone textures based on the proposed algorithm is 95.91%.
5 Conclusions
This study provided a new direction for classification of stone texture based on
features over texture elements derived from alphabet shapes and their pattern count.
Based on the results generated using analysis on basic texture elements and their
Table 3 Comparison table showing the classification accuracy of the proposed and other similar existing methods with respect to different texture databases

Database      Syntactic pattern on 3D   Wavelet-based histogram method   Texton feature detection on patterns   Proposed method
Google        93.32                     93.56                            94.86                                  95.31
Paul Bourke   93.05                     93.05                            95.23                                  96.37
Mayang        92.83                     92.95                            94.39                                  96.79
VisTex        93.15                     92.87                            95.46                                  96.13
Average       93.09                     93.11                            94.99                                  96.15
Fig. 9 Comparison graph of classification accuracy of the proposed and other methods
patterns, the present study concludes that Alphabet Texture Element Patterns are also suitable for stone texture classification. The present study adopts size- and rotation-invariant concepts and derives a user-defined algorithm for stone texture classification. The proposed approach is simple and has low time complexity, as the computations involved are simple.
References
1. Haralick RM (1979) Statistical and structural approaches to texture, In: Proceedings of 4th
international joint conference pattern recognition, vol 67, pp 45–60
2. Van Gool L, Dewaele P, Oosterlinck A (1985) Survey-texture analysis. Comput Vis Graph
Image Process 29:336–357
3. Burrough PA (1983) Multiscale sources of spatial variation in soil, the application of fractal
concepts to nested levels of soil variation. J Soil Sci 34:577–597
4. Moody A, Johnson DM (2001) Land-surface phenologies from AVHRR using the discrete
Fourier transform. Remote Sens Environ 75:305–323
5. Woodcock CE, Strahler AH (1987) The factor of scale in remote sensing. Remote Sens Environ
21:311–332
6. Woodcock CE, Strahler AH, Jupp DLB (1988) The use of variograms in remote sensing II:
real digital images. Remote Sens Environ 25:349–379
7. McCloy KR (2002) Analysis and removal of the effects of crop management practices in
remotely sensed images of agricultural fields. Int J Remote Sens 23:403–416
8. Bocher Peder Klith, McCloy Keith R (2006) The fundamentals of average local variance:
detecting regular patterns. IEEE Trans Image Process 15:300–310
9. Eswara Reddy B, Nagaraja Rao A, Suresh A, Vijaya Kumar V (2007) Texture classification by
simple patterns on edge direction movements. IJCSNS 7(11):220–225
10. Vijaya Kumar V, Eswara Reddy B, Raju USN, Chandra Sekharan K (2007) An innovative
technique of texture classification and comparison based on long linear patterns. J Comput Sci
3(8): 633–638
11. Vijaya Kumar V, Eswara Reddy B, Raju USN A measure of patterns trends on various types of preprocessed images. IJCSNS 7(8):253–257
12. Suresh A, Raju USN, NagarajaRao A, Vijaya Kumar V (2008) An innovative technique of mar-
ble texture description based on grain components. Int J Comput Sci Netw Secur 8(2):122–126
13. Raju USN, Vijaya Kumar V, Suresh A, Radhika Mani M (2008) Texture description using
different wavelet transforms based on statistical parameters. In: Proceedings of the 2nd WSEAS
international symposium on wavelets theory & applications in applied mathematics, signal
processing & modern science, Istanbul, Turkey, pp 174–178
14. Pullela SVVSR, Kumar et al (2017) Alphabet pattern approach for color fabric texture clas-
sification. In: IEEE—international conference on computer communication and informatics
(ICCCI-2017), Coimbatore, India
15. Pullela SVVSR Kumar et al (2016) Texture primitive unit extraction using different wavelet
transforms for texture classification. iCATccT, SJBIT, Karnataka, during 21–23 July 2016
16. SasiKiran J, Ravi Babu U, Kumar VV (2013) Wavelet based Histogram method for classification
of textures. IJECT, 4(3):149–164
17. Ravi Babu U, Kiran Kumar Reddy P, Eswara Reddy B (2013) Texture classification based on
Texton patterns using various grey to grey level preprocessing methods. Int J Signal Image
Process Pattern Recogn 6(4)
18. Sumalatha L, Sujatha B (2013) A new approach for recognition of mosaic textures by LBP
based on RGB model. Signal Image Process Int J (SIPIJ) 4(1)
19. Sujatha B, Sekhar Reddy C, Kiran Kumar Reddy P (2013) Texture classification using texton
co-occurrence matrix derived from texture orientation. Int J Soft Comput Eng (IJSCE) 2(6),
ISSN 2231-2307
20. Suresh A, Vijaya Kumar V (2010) Pattern Based classification of stone textures on a cubical
mask. Int J Univ Comput Sci 1(1):4–9
21. Babu UR, Kumar VV, Sujatha B (2012) Texture classification based on texton features. Image
Graph Signal Process 4(8):36–42
Modified Automatic Digital Modulation
Recognizer for Software Defined Radio
1 Introduction
The communication technology has traveled a huge leap from Hardware Radio (HR)
to SDR/CR. In an effort to provide a wide variety of increasing demands of market
namely data rate, effective bandwidth utilization, Quality of Service (QoS), and
2 Related Work
Azzouz and Nandi [5] proposed the feature-based ADMR, which used a decision-theoretic approach. ASK2, ASK4, PSK2, PSK4, FSK2, and FSK4 were considered for recognition, and the success rate was above 90% at a Signal-to-Noise Ratio (SNR) of 10 dB. Further, Zhu and Nandi [1] and Azzouz and Nandi [2, 6] give elaborate accounts of AMR, including the recognition of analog and digital modulations, the complexities and computational overhead, and the trade-offs between known and unknown parameters such as carrier frequency, data rate, and spectrum symmetry, to name a few, required for recognition with respect to the aforementioned algorithm. Zhu and Nandi [1] and Dobre et al. [7], along with feature-based classification, have discussed likelihood-based algorithms such as the Average Likelihood Ratio Test (ALRT), the Generalized Likelihood Ratio Test (GLRT), the Hybrid Likelihood Ratio Test (HLRT), and wavelet-based recognition. Yu [8], in his dissertation submitted to the New Jersey Institute of Technology, elaborated upon the features adopted by Azzouz and Nandi and provided experimental results with appropriate comments about the features and threshold fixation.
3 Proposed Algorithm
The proposed algorithm is similar to [2, 5, 8, 9]. Random symbols are generated from a source, and one of the modulations is chosen from the list of contenders. Frames are created with a suitable number of symbols Ns. Each frame is passed through a low-pass filter with a known signal strength, in other words, Additive White Gaussian Noise (AWGN) is added to the signal. The analytic signal shown in Eq. (1) is built for each frame by applying the Hilbert Transform (HT), through which the Instantaneous Amplitude and Phase (IAP) are obtained. The instantaneous frequency is derived by unwrapping the phase component. The experimentation flow is shown in Fig. 1.

f(t) = x(t) + j\, y(t)                                   (1)

where x(t) is the intercepted signal, f(t) is the analytic signal of x(t), and y(t) is the imaginary part of f(t), produced by taking the HT of x(t).
The band-pass filter used to band-limit the received signal is a zero-phase filter; a Butterworth filter is also used for the same purpose. Hence, the linear component of the phase is assumed to be zero.
The features considered for the decision, γmax, σap, σdp, σaa, and σaf [5], are defined as follows:
i. The maximum value of spectral power density γmax .
The maximum value of the power spectral density of the normalized centered instan-
taneous amplitude of the intercepted signal is γmax . Normalization of the amplitude
is necessary in order to compensate for the channel gain. The feature is derived as

\gamma_{max} = \max \big| \mathrm{DFT}\big(A_{cn}(i)\big) \big|^2 / N            (2)

where A_{cn}(i) is the value of the normalized centered instantaneous amplitude at time instants t = i/f_s (i = 1, 2, …, N) and is defined as

A_{cn}(i) = \frac{A_n(i)}{m_a}                                                   (3)

m_a = \frac{1}{N} \sum_{i=1}^{N} A(i)                                            (4)
ii. Standard Deviation (SD) of the absolute value of the nonlinear component
of the instantaneous phase σap .
σap is the standard deviation of the absolute value of the nonlinear component of the instantaneous phase, evaluated over the non-weak samples of the intercepted signal. As mentioned in [8], any sample with instantaneous amplitude A_n(i) above the threshold a_t = 0.95 is taken as a non-weak sample; the same amplitude threshold is used in the experiments conducted here. σap is defined by

\sigma_{ap} = \sqrt{ \frac{1}{C} \sum_{A_n(i) > a_t} \phi_{NL}^2(i) \;-\; \left( \frac{1}{C} \sum_{A_n(i) > a_t} |\phi_{NL}(i)| \right)^{2} }        (5)

where φ_NL(i) is the value of the nonlinear component of the instantaneous phase at time instants t = i/f_s, and C is the number of non-weak samples in {φ(i)}.
iii. SD of nonlinear component of the direct (not absolute) instantaneous phase
signal evaluated over non-weak samples σdp .
σdp is defined as
\sigma_{dp} = \sqrt{ \frac{1}{C} \sum_{A_n(i) > a_t} \phi_{NL}^2(i) \;-\; \left( \frac{1}{C} \sum_{A_n(i) > a_t} \phi_{NL}(i) \right)^{2} }        (6)
σaf is defined as

\sigma_{af} = \sqrt{ \frac{1}{C} \sum_{A_n(i) > a_t} f_N^2(i) \;-\; \left( \frac{1}{C} \sum_{A_n(i) > a_t} |f_N(i)| \right)^{2} }        (8)

where f_N(i) is the normalized centered instantaneous frequency evaluated over the non-weak samples.
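A hedged end-to-end sketch of the feature extraction is given below: the analytic signal of a BPSK test frame is obtained with the Hilbert transform, and σaf (Eq. (8)) together with an amplitude-spread feature in the spirit of σaa are evaluated over the non-weak samples. The test signal, the SciPy-based implementation, and the exact normalization are assumptions for illustration, not the authors' MATLAB code.

import numpy as np
from scipy.signal import hilbert

fs, fc, rs = 150_000, 1_200, 1_500            # sampling, carrier, symbol rate (Hz)
t = np.arange(0, 2048 / rs, 1 / fs)
symbols = np.random.default_rng(0).integers(0, 2, int(len(t) * rs / fs) + 1)
phase = np.pi * symbols[(t * rs).astype(int)]          # BPSK test frame
x = np.cos(2 * np.pi * fc * t + phase)

f_analytic = hilbert(x)                                 # analytic signal, Eq. (1)
A = np.abs(f_analytic)                                  # instantaneous amplitude
phi = np.unwrap(np.angle(f_analytic))                   # instantaneous phase
f_inst = np.diff(phi) * fs / (2 * np.pi)                # instantaneous frequency

A_n = A / A.mean()                                      # normalized amplitude, Eqs. (3)-(4)
non_weak = A_n[:-1] > 0.95                              # a_t = 0.95, as in [8]

f_c = f_inst - f_inst[non_weak].mean()
f_N = f_c / rs                                          # normalized centered frequency
sigma_af = np.sqrt(np.mean(f_N[non_weak] ** 2)
                   - np.mean(np.abs(f_N[non_weak])) ** 2)   # Eq. (8)
sigma_aa = np.std(np.abs(A_n - 1))                      # amplitude-spread feature (sigma_aa-like)
print(f"sigma_af = {sigma_af:.3f}, sigma_aa = {sigma_aa:.3f}")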
Experiments are carried out in the MATLAB 2014b [10] framework. The carrier frequency, sampling rate, and symbol rate are taken as 1200 Hz, 150,000 Hz, and 1500 Hz, respectively. Although variations in these values are also supported, the above values were taken to verify and agree with [2, 5, 8]. The results demonstrated that signal classification is possible even without γmax if the SNR is above 18 dB. Therefore, an SNR of 25 dB was chosen for threshold fixation using a varying number of symbols per frame. Initially, frames with 21 symbols were tested.
Fig. 3 γmax versus frames at 25 dB SNR. a With 21 symbols per frame. b With 2048 symbols per
frame
Figure 3a shows that a small number of symbols, such as 21, is insufficient to arrive at a threshold for γmax that could bifurcate the contenders considered. Thus, a large set of 65,536 symbols is generated and Ns, the number of symbols per frame, is taken as large as 2048. The results obtained are shown in Fig. 3b, where the intercepted signal can clearly be segregated into amplitude-variant and amplitude-invariant classes. However, comparing γmax for Ns of 21 and 2048 symbols per frame shows an unacceptable variation. Hence, γmax is eliminated from the decision tree.
Fig. 4 σaa versus frames at 25 dB SNR. a With 21 symbols per frame. b With 2048 symbols per
frame
Fig. 5 σdp versus frames at 25 dB SNR. a With 21 symbols per frame. b With 2048 symbols per
frame
The results in Fig. 4b disclose that Ns = 2048 symbols per frame provide ample data for threshold fixation. Figure 4b gives a clear split between the amplitude variants (ASK2, ASK4) and the amplitude invariants (PSK2, PSK4, FSK2, and FSK4), with a threshold tσaa for σaa of 0.95. Further, the threshold tσdp for σdp is fixed at 0.6 to differentiate between ASK2 and ASK4, as shown in Fig. 5b.
σap is used to distinguish between phase-variant and phase-invariant signals. In the decision tree adopted, PSK2 and PSK4 are differentiated from FSK2 and FSK4. The threshold for segregation, tσap, is selected as 0.1; PSK2 and PSK4 have values less than 0.1, as shown in Fig. 6b. In continuation, two thresholds for σaf, namely tσaf1 for the separation of PSK2 from PSK4 and tσaf2 for categorizing FSK2 and FSK4, are chosen.
The value of tσaf1 is fixed at 0.2: a σaf below 0.2 represents PSK4, otherwise the decision is PSK2. tσaf2 demarcates between FSK2 and FSK4 and, based on the observation made in Fig. 7b, its value is fixed at 2.5. If σaf is above 2.5, it indicates that the intercepted signal is either FSK2 or FSK4.
Fig. 6 σap versus frames at 25 dB SNR. a With 21 symbols per frame. b With 2048 symbols per
frame
Fig. 7 σa f versus frames at 25 dB SNR. a With 21 symbols per frame. (b) With 2048 symbols per
frame
5 Conclusion
The paper targets the recognition of the modulation scheme of an intercepted signal. The experimentation has been carried out to identify the digital modulation scheme out of six contenders, namely ASK2, ASK4, PSK2, PSK4, FSK2, and FSK4.
Fig. 8 Accuracy versus SNR (in dB). a With frame size as 256 symbols. b With frame size as 2048 symbols
The experimentation has the strict assumption that the intercepted signal is one of these. The inconsistent parameter γmax has been eliminated from the decision tree. In spite of eliminating this parameter and incorporating two thresholds for σaf, the accuracy of detection for SNR values above 20 dB is 100%. The paper shows that the minimum frame size is 1024 symbols per frame; below this, the detection accuracy starts deteriorating even though the SNR is above 20 dB.
References
on a comparative wafer. Suspension beams are used as part of the design in order to avoid the stiction problem. As per the design presented in the paper, the switch is designed, simulated, fabricated, and characterized in the K-band frequency range, where the maximum stress is found for this layout. By investigating the design flexibility, the paper shows that the designed switch exhibits a reduction in the resonant frequency with an increase in the floating metal.
1 Introduction
RF MEMS switches were widely used in the telecommunication and satellite appli-
cation field as they have high isolation loss at microwave frequency range. When
compared to all other switches the RF MEMS switches works at a large range of
frequencies that are around 0.1–100 GHz with less consumption of power, high iso-
lation loss, fewer insertion losses at reasonable cost [1]. Dielectric on Metal type
switches are the most common type of capacitive shunt switches in RF MEMS tech-
nology, where the top electrode is the beam, and the bottom electrode is the signal
line. The dielectric layer is deposited on the signal line which forms a DOM switch.
Some of the disadvantages of DOM switches are as follows, the dimensions of the
switch are to be manipulated frequently and the switch should be redesigned for
operating at a different range of frequencies. Imperfect interface roughness at the
contacting surface of the movable beam and dielectric layer. As a result, the down-
state capacitance reduces. This reduction causes the increase in resonant frequency
than required. To overcome all the drawbacks a metallic-insulator (MIM) capacitor
based switch is designed. The surface radar system is the most widely used system
for airport surveillance system. The surface radar system is used as primary radar
that provides surveillance cover for the maneuvering area, which is defined as the area used for the take-off, landing, and taxiing of aircraft, excluding aprons.
The fabrication of the switch is done on a 400 µm thick silicon wafer. The material
used for the dielectric is a silicon wafer of 200 nm thickness. The operation frequency
for this kind of design is around 0.5–6 GHz which is unsuitable for a higher band of
frequencies than mentioned above range [2].
The dielectric material used at K-band frequencies named strontium titanate oxide
(SrTiO3 ) is taken for the dielectric layer in the MIM capacitor. The insertion loss
of the switch is around 0.08 dB at 10 GHz frequency range. As the decrease in
the isolation losses is more than 5 GHz the design is not suitable and the dielectric
material of high frequency is required [3].
To obtain a low upstate capacitance, the beam of the switch has been modified. A Ta2O5 dielectric of 200 nm thickness is utilized. An isolation of −25 dB is obtained at a frequency of 15 GHz, which is the major drawback of this design. The principal observation is that there is a large discrepancy between the isolation and the resonant frequency obtained from simulation and the results obtained from the hardware devices [4, 5].
In this MIM switch, the dielectric layer is made up of SiO2 of 90 nm thickness. The magnitude of the insertion loss obtained is below 0.3 dB up to a frequency of 20 GHz, but the drawbacks with respect to isolation increase simultaneously [6, 7].
In this paper, we have designed and simulated a MIM capacitor type MEMS switch. The proposed design shows an isolation of −80 dB at a frequency of 30 GHz, and the major advantage of the MIM capacitor is that, due to the presence of the floating metal, the device structure need not be changed every time there is a change in the gap between the two plates. This is the major advantage of utilizing the floating plate in the MIM capacitor [8–10]. The device is mainly used in the field of radar applications.
The proposed model consists of a silicon substrate with a dielectric made of silicon nitride material. A ground plane of a certain height is considered on the two sides of the dielectric, and the signal line is placed in the middle of the two ground planes. A gap is considered between the signal line and the ground planes on either side. Another two rectangular blocks are taken on the two ground planes. A rectangular block is subtracted from the ground plane, and as a result it forms a rectangular-shaped defect in the ground plane. A dielectric of a certain height is then considered on the signal line. Then, with a gap of 3 µm, a bridge of a certain height is taken. The support for the bridge is provided from the defected area of the ground plane. From each end of the bridge, a spring is considered that exactly coincides with the bridge height, and supporting beams for the bridge are taken on the ground planes. Perforations are made on the floating bridge to increase the sensitivity. The structure of the proposed switch is shown in Fig. 1.
3 Fabrication Flow
The fabrication flow of the proposed RF MEMS switch conveys how complex MEMS structures can be easily fabricated. The process flow is similar to standard fabrication steps, where the surface micromachining technique is used to fabricate this structure.
Figure 2a shows the polished upper surface of the silicon substrate. Si3N4 is deposited with 0.5 µm thickness over the substrate, acting as the substrate dielectric, as shown in Fig. 2b. Figure 2c shows the deposition of the CPW over the substrate dielectric. Figure 2d reports the addition of a sacrificial layer used to add a dielectric layer over the signal line, followed by Fig. 2e, which highlights the additional dielectric layer over the signal line. Figure 2f shows the etching of the sacrificial layer. Figure 2g shows the deposition of the sacrificial layer for the formation of the floating metal. Figure 2h shows the deposition of the floating metal, i.e., gold material, over the sacrificial layer. At last, the sacrificial layer is removed by etching or the DRIE method. The suspended membrane can be seen in Fig. 2i.
4 Design Flexibility
The design flexibility gives information about the ability of the switch to operate over the entire range of frequencies without changing the structure of the design. In this paper, the key parameter, which depends on the capacitance value, is the resonant frequency; it is obtained from the following equation.
f_r = \frac{1}{2\pi \sqrt{L_b C_{dm}}}                     (1)
where
fr is the resonant frequency
Lb is the inductance
Cdm is the downstate capacitance
The inductance in the design is developed between the beam and the ground plate.
The capacitance is developed in between the beam and the signal line. The value of
the downstate capacitance is calculated and obtained a value of 0.7 pF by the Eq. (2)
C_{dm} = \frac{\varepsilon_0\, \varepsilon_r\, A_{mc}}{g_0}                     (2)
where
ε0 is the permittivity of the air medium,
εr is the permittivity of the silicon nitride dielectric medium, and
g0 is the gap between the dielectric and the floating metal.
The upstate capacitance is the capacitance developed when the switch is in ON
state. The value of the upstate capacitance is calculated and has obtained the value
of 0.033 pF by the Eq. (3).
C_{um} = \frac{\varepsilon_0\, \varepsilon_{r1}\, A_{Mc}}{g_0 + \dfrac{t_d}{\varepsilon_{r2}}}                     (3)
where
ε0 is the permittivity of the air medium,
εr1 is the permittivity of the air medium,
εr2 is the permittivity of the silicon nitride dielectric medium,
g0 is the gap between the dielectric and the floating metal, and
td is the thickness of the dielectric medium.
In the above-mentioned equation, the Amc is the overlap area of the dielectric
electrode and the signal line as given in the Eq. (4). The width of the overlap area
is the width of the signal line and the length of the overlap area is the length of the
dielectric electrode. By calculating the value of the overlap area and substituting in
the above equation the down capacitance is obtained.
A_{Mc} = L_{Mc} \times W_{Mc}                     (4)
The spring constant is an important parameter to obtain pull-in voltage. The spring
constant is obtained by simulating the model using solid mechanics fem tool as shown
in Fig. 3. The force is applied to the floating metal such that the corresponding
displacement has been observed. The ratio of force to the displacement results in
spring constant as shown in Fig. 4.
Fig. 3 Displacement along Z axis by applying force. The obtained spring constant is 0.56 N/m
The electrostatic force is generated in between floating metal and signal line which
pulls the floating metal towards the signal line. The pull-in voltage is the minimum
voltage required to pull the floating metal by two-third of the gap.
The pull-in voltage is obtained by the equation
V_p = \sqrt{\frac{8\, K\, g_0^3}{27\, \varepsilon_0\, A_{Mc}}}                     (5)
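The sketch below numerically evaluates Eqs. (2), (3), and (5). The overlap area, gap, dielectric thickness, and relative permittivity used here are assumed example values for illustration only; they are not the exact dimensions of the proposed switch, so the resulting numbers differ somewhat from the reported 0.7 pF, 0.033 pF, and 7.2 V.

import numpy as np

eps0 = 8.854e-12          # F/m, permittivity of free space
eps_r = 7.5               # assumed relative permittivity of Si3N4
A_mc = 100e-6 * 80e-6     # assumed overlap area, Eq. (4)
g0 = 3e-6                 # gap between dielectric and floating metal (m)
t_d = 0.5e-6              # assumed dielectric thickness (m)
k_spring = 0.56           # N/m, spring constant reported in Fig. 3

C_dm = eps0 * eps_r * A_mc / g0                              # downstate capacitance, Eq. (2)
C_um = eps0 * A_mc / (g0 + t_d / eps_r)                      # upstate capacitance, Eq. (3), eps_r1 = 1
V_p = np.sqrt(8 * k_spring * g0 ** 3 / (27 * eps0 * A_mc))   # pull-in voltage, Eq. (5)

print(f"C_dm = {C_dm*1e12:.3f} pF, C_um = {C_um*1e15:.1f} fF, V_p = {V_p:.2f} V")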
By simulating the proposed design using the FEM tool, the obtained pull-in voltage is 7.2 V for the obtained spring constant. By applying a voltage of 7.2 V, the displacement obtained was 2.9 µm, as shown in Fig. 5.
RF analysis of the designed MIM capacitor has been done using Ansoft HFSS. Generally, for a wide range of frequencies, the scattering parameters are essential. The insertion and return losses are obtained when the switch is in the ON state, i.e., when no voltage is applied. The isolation loss is obtained during the OFF state of the switch, in which the beam is completely displaced by 2 µm by applying the pull-in voltage; the result is shown in Fig. 6. Figures 7 and 8 show the insertion loss and the isolation loss.
See Fig. 7.
Fig. 6 At 5–40 GHZ of frequency, we have achieved a return loss of less than 10 dB in magnitude
Fig. 7 At 5 GHZ of frequency we achieved an insertion loss of −55 dB and at 40 GHz of frequency,
we have achieved insertion loss of less than −35 dB
See Fig. 8.
6 Conclusion
In this paper, the MIM capacitor is designed and simulated using the HFSS and COMSOL tools. An isolation loss of −82 dB at 39 GHz, an insertion loss of −55 dB at 5 GHz and −35 dB at 40 GHz, and a return loss of 0.01 dB in magnitude over the 5–40 GHz frequency range are obtained using the HFSS tool. A displacement of 2.94 µm is also obtained by applying a force of 2 µN using the COMSOL tool.
Acknowledgements Authors would like to thank NPMASS for providing necessary FEM tool.
The Authors would like to thank SERB (Science and Engineering Research Board), Govt. of India,
New Delhi, for providing partial financial support to carry out this research work under ECRA
Scheme (File No: SERB/ECR/2016/000757).
References
1. Rebeiz GM, Muldavin JB (2001) RF MEMS switches and switch circuits. IEEE Microwave
Mag 2(4):59–71
2. Gopalakrishnan S, Dasgupta A, Nair DR (2016) Study of the effect of surface roughness on
the performance of RF MEMS capacitive switches through 3-d geometric modeling. IEEE J
Elect Devices Soc 4(6):451–458
3. Rizk JB, Rebeiz GM (2002) Digital-type RF MEMS switched capacitors. In: 2002 IEEE MTT-S
International on microwave symposium digest, vol 2, pp 1217–1220
4. Kim JY, Chung GH, Bu KW, Park JU (2001) Monolithically integrated microwave RF MEMS
capacitive switches. Sens Actuat 89:88–94
5. Jansen H, Nauwelaers B, Fiorini P, De Raedt W, Tilmans HAC, Rottenberg X (2002) Boosted
RF-MEMS capacitive shunt switches. In: Workshop on semiconductor sensor and actuator
technology (3rd SeSens), pp 667–671
6. Rottenberg X, Jansen H, Fiorini P, De Raedt W, Tilmans HAC (2002) Novel RF-MEMS capac-
itive switching structures. In: 2002 on 32nd European on microwave conference, pp 1–4
7. Solazzi F, Palego C, Molinero D, Farinelli P, Colpo S, Hwang JCM, Margesin B, Sorrentino
R (2012) High-power high-contrast RF MEMS capacitive switch. In: 2012 7th European on
microwave integrated circuits conference (EuMIC), pp 32–35
8. Giacomozzi F, Papandreou E, Margesin B, Papaioannou G (2011) Floating electrode micro-
electromechanical system capacitive switches: a different actuation mechanism. Appl Phys
Lett 99:073501
9. Rebeiz GM (2003) RF MEMS: theory design and technology. Wiley, Hoboken, New Jersey
10. Simons RN (2001) Coplanar waveguide circuits, components, and systems. Wiley, New York
Comparative Study of Soft Computing
Based High-Resolution Satellite Image
Segmentation in Additive
and User-Oriented Color Space
P. Ganesan (B)
Department of Electronics and Communication Engineering, Vidya Jyothi
Institute of Technology, Aziz Nagar, C.B. Post, Hyderabad, India
e-mail: gganeshnathan@gmail.com
L. M. I. Leo Joseph · B. Girirajan
Department of Electronics and Communication Engineering, S.R. Engineering College,
Warangal, India
e-mail: leojoseph@srecwarangal.ac.in
B. Girirajan
e-mail: girirajan_b@srecwarangal.ac.in
V. Kalist
Faculty of Electrical and Electronics, Sathyabama University, Chennai, India
e-mail: kalist.v@gmail.com
1 Introduction
Fuzzy C-Means (FCM) clustering segregates a data set, which is a group of data points (for example, the pixels of an image), into c fuzzy clusters [13]. The objective function of FCM is presented in (4):

J_m(U, V) = \sum_{i=1}^{c} \sum_{k=1}^{n} \mu_{ik}^{m}\, \| x_k - v_i \|^2                   (4)

where m is the weighting exponent factor [14]. The cluster centers and membership values are computed as

v_i = \frac{\sum_{k=1}^{n} \mu_{ik}^{m} x_k}{\sum_{k=1}^{n} \mu_{ik}^{m}}                     (5)

\mu_{ik} = \left\{ \sum_{j=1}^{c} \left( \frac{\| x_k - v_i \|}{\| x_k - v_j \|} \right)^{2/(m-1)} \right\}^{-1}                     (6)
P_m(T, V; X, \gamma) = \sum_{i=1}^{c} \sum_{k=1}^{n} t_{ik}^{m}\, d_{ki}^{2} + \sum_{i=1}^{c} \gamma_i \sum_{k=1}^{n} (1 - t_{ki})^{m}                     (7)

where γ is the weighting factor [15]. Equations (8) and (9) illustrate the two necessary conditions for the objective function to achieve its optimal value [16].

t_{ki} = 1 \Big/ \left[ 1 + \left( \frac{d_{ik}^{2}}{\gamma_i} \right)^{1/(m-1)} \right], \quad 1 \le i \le c;\; 1 \le k \le n                     (8)

v_i = \frac{\sum_{k=1}^{n} t_{ki}^{m} x_k}{\sum_{k=1}^{n} t_{ki}^{m}}                     (9)
PF_m(T, V, U; X, \gamma) = \sum_{i=1}^{c} \sum_{k=1}^{n} \left( a\,\mu_{ik}^{m} + b\,t_{ik}^{\eta} \right) d_{ki}^{2} + \sum_{i=1}^{c} \gamma_i \sum_{k=1}^{n} (1 - t_{ki})^{\eta}                     (10)

J_{Mod} = \sum_{k=1}^{n} \sum_{i=1}^{c} U_{ik}^{m}\, W_{ji}\, \| X_k - V_i \|^2                     (14)
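A minimal NumPy sketch of the FCM update rules in Eqs. (5) and (6), applied to image pixels reshaped into an (n, channels) array, is given below. The number of clusters, the fuzzifier m, the iteration budget, and the random test image are illustrative assumptions; this is not the exact implementation used for the reported experiments.

import numpy as np

def fcm_segment(pixels, c=3, m=2.0, n_iter=25, seed=0):
    rng = np.random.default_rng(seed)
    n = pixels.shape[0]
    u = rng.random((c, n))
    u /= u.sum(axis=0)                                  # random fuzzy partition matrix
    for _ in range(n_iter):
        um = u ** m
        v = (um @ pixels) / um.sum(axis=1, keepdims=True)        # centers, Eq. (5)
        d = np.linalg.norm(pixels[None, :, :] - v[:, None, :], axis=2) + 1e-12
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)  # Eq. (6)
    return v, np.argmax(u, axis=0)                       # centers and hard labels

# Example on a small synthetic "image" of random RGB pixels
img = np.random.default_rng(1).integers(0, 256, (64, 64, 3)).astype(float)
centers, labels = fcm_segment(img.reshape(-1, 3))
print(centers)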
The image acquired from the satellite is large even after preprocessing. Enhancement processes are used to improve the visibility of the image by removing unnecessary (noisy) pixels and sharpening the fine and minute details. The enhancement can be performed either in the spatial or in the frequency domain.
The enhanced images are in the RGB color space. In this device-dependent color space, the color information (chrominance) and the intensity information (luminance) are mixed. In this work, the image is therefore transformed from the RGB color model to another color space for effective segmentation. The transformed image is segmented using various soft computing approaches, and the segmented images are then evaluated using 16 image quality measures.
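For the color space transformation itself, a hue-based representation can be obtained with standard tools; the short sketch below uses OpenCV's HSV conversion as a stand-in for the hue-based spaces discussed here, and the placeholder image array (and the commented file name) are hypothetical.

```python
import cv2
import numpy as np

# A placeholder BGR image stands in for the (resized) satellite scene;
# in practice this would come from something like cv2.imread("satellite_scene.tif").
bgr = np.zeros((512, 512, 3), dtype=np.uint8)
bgr = cv2.resize(bgr, (512, 512))              # resize to reduce computational cost
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)     # hue-based color space
pixels = hsv.reshape(-1, 3).astype(float)      # ready to pass to a clustering routine
```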
This approach can be applied to oil spill detection in oceans and seas, land cover classification, detection of changes in water resources and landscapes, and forest fire segmentation. In this work, the satellite images are segmented using several soft computing approaches. The course of action for the recognition of the ROI in a satellite image is elucidated as follows.
Step 1: Acquire the satellite image from database.
Step 2: Perform preprocessing on the test image to enhance its fine details and
filter out unnecessary information. This enhancement process includes
smoothening and sharpening.
Step 3: Transform the satellite images in RGB color space into other color spaces.
Step 4: Apply soft computing based segmentation techniques, discussed in Sect. 3,
to perform the segmentation.
Step 5: Evaluate the segmentation process using image quality measures.
All soft computing based segmentation techniques have been carried out on an Intel Core i3 computer with 2 GB RAM. Although the results of the segmentation techniques on only two images are analyzed in this chapter, the techniques work equally well on high- and medium-resolution satellite images, and the results are found to be consistent. The original satellite images have been resized to reduce the computational cost. The two images used in this chapter are:
• Ferrari World, Abu Dhabi, UAE, captured by the GeoEye-1 satellite sensor on April 22, 2011.
• Rim and American Fires (RIM Fire), eastern stretch of central California, captured by the Landsat Enhanced Thematic Mapper Plus (ETM+) on August 23, 2013.
The main function of the clustering algorithms is to determine the cluster centers and assign each pixel to its nearest cluster center. From Table 1, the following inferences are made.
• For the fuzzy-based techniques, the RGB color space image is grouped into three and four clusters, respectively.
• For SOM, the image is split into only two clusters.
• The execution time of PFCM is much lower than that of the other methods, followed by PCM, SOM, MFCM, and FCM.
The following conclusions are drawn for the hue-based color spaces from Table 2.
• For the fuzzy-based techniques, the Ferrari World satellite image is segmented into three clusters.
Table 2 Evaluation of performance measures of Ferrari World satellite image segmentation in HSI
color space
Technique No. of clusters Cluster centers No. of iterations Execution time (in s)
FCM 3 153.8753, 35.17152, 155.2568 15 9.2662
236.1077, 216.7481, 217.8645
86.29749, 17.22301, 115.0743
PCM 3 254.5985, 245.2618, 254.4589 14 3.566
186.2495, 111.9351, 124.912
110.0633, 15.97719, 136.9259
MFCM 3 157.4876, 41.65876, 155.3853 15 7.6698
250.5735, 240.2689, 245.7919
82.46453, 15.59788, 110.3529
PFCM 3 150.6096, 18.01974, 162.1574 15 0.022769
254.9349, 243.1841, 254.8959
81.71241, 12.59561, 115.2102
SOM 2 209.3349, 165.3243, 163.0133 13 4.4904
115.2915, 26.7603, 117.1357
Fig. 1 Fuzzy and SOM-based segmentation result for Ferrari World satellite image in RGB color
space
Figures 1 and 2 demonstrate the fuzzy and SOM-based segmentation results for the Ferrari World satellite image in the RGB and HSI color spaces, respectively. Figure 3 demonstrates the outcome of the fuzzy and neural network based segmentation techniques for the RIM Fire satellite image in the RGB color space. The analysis of the RIM Fire satellite image segmentation in the RGB color space is shown in Table 3.
• Here the image is grouped into four clusters for the fuzzy-based techniques.
• In SOM, the image is clustered into only two groups.
• PFCM has a much lower execution time than the other techniques, followed by SOM, PCM, MFCM, and FCM.
Fig. 2 Fuzzy and SOM-based segmentation result for Ferrari World satellite image in HSI color
space
From the experimental results for the hue-based color spaces (Table 4), the following conclusions are made.
• For the fuzzy-based techniques, the image is segmented into three clusters.
• In SOM, the image is divided into only two clusters.
• PFCM has a much lower execution time than the other techniques.
• For the HSI and HSV color spaces, all five techniques have lower execution times than in the RGB color space (Fig. 4).
Fig. 3 Fuzzy and SOM-based segmentation result for RIM Fire satellite image in RGB color space
Fig. 4 Fuzzy and SOM-based segmentation for RIM Fire satellite image in HSI color space
Table 3 Evaluation of performance measures of RIM fire satellite image segmentation in RGB
color space
Technique No. of clusters Cluster centers No. of iterations Execution time (in s)
FCM 4 161.5433, 156.5696,129.2729 15 18.1631
104.1202, 123.7535, 78.30154
50.22397, 110.1458, 31.70726
179.3456, 206.2967, 203.0115
PCM 4 143.1303, 161.2407, 144.4983 15 8.2939
101.6234, 126.4213, 68.71141
43.59554, 104.9613, 28.58328
208.9596, 201.5896, 182.3449
MFCM 4 170.9216, 158.9087, 127.407 15 9.2963
100.5091, 126.6759, 74.8224
44.71518, 99.1069, 30.6859
156.8755, 207.2242, 215.517
PFCM 4 163.3401, 152.7199, 126.9097 15 0.1837
104.818, 121.6295, 76.86739
48.53984, 117.2822, 29.95883
193.3436, 195.3339, 186.6962
SOM 2 172.9243, 170.3829, 159.9968 8 6.915
109.2854, 99.44422, 67.81135
Table 4 Evaluation of performance measures of RIM fire satellite image segmentation in HSI color
space
Technique No. of clusters Cluster centers No. of iterations Execution time (in s)
FCM 3 166.0441, 56.37428, 147.8378 15 12.59
75.56924, 74.46422, 144.8156
115.8986, 74.90158, 79.13323
PCM 3 163.4761, 55.38775, 144.6972 15 7.242
54.55343, 93.09554, 163.8428
111.7345, 71.40764, 77.04312
MFCM 3 153.3824, 49.07469, 115.4724 15 8.106
173.0113, 66.7522, 180.9075
91.50086, 78.56671, 99.65019
PFCM 3 166.7797, 56.94499, 144.3765 15 0.048
68.02347, 70.06006, 131.4494
116.4233, 79.36888, 74.36983
SOM 2 172.0318, 57.60199, 133.2721 10 6.6649
59.0387, 86.65266, 181.1722
6 Conclusion
This work analyzed the performance of satellite image segmentation in the RGB and HSI color spaces using fuzzy and neural network based soft computing techniques. HSI color space based segmentation outperforms RGB-based segmentation. On the basis of this work, the following conclusions are drawn. The MFCM method produced good cluster centers and excellent results. The computational cost is lowest for PFCM segmentation, followed by PCM, SOM, and MFCM segmentation. The execution time depends on the number of clusters, the complexity of the image, and the color space.
References
1. Sheikh HR, Sabir MF, Bovik AC (2006) A statistical evaluation of recent full reference image
quality assessment algorithms. IEEE Trans Image Process 15:3440–3451
2. Kurnaz M, Dokur Z, Olmez T (2006) Segmentation of remote sensing images by incremental
neural network. Comput J Pattern Recogn Lett 26:1096–1104
3. Awad M (2011) An unsupervised artificial neural network method for satellite image segmen-
tation. Int Arab J Informat Technol 7:199–205
4. Thoonen G, Mahmood Z, Peeters S, Scheunders P (2012) Multisource classification of color
and hyper spectral images using color attribute profiles and composite decision fusion. IEEE
J Select Topics Appl Earth Observat Remote Sensing 5:510–523
5. Ibraheem Noor A, Hasan Mokhtar M, Khan Rafiqul Z, Mishra Pramod K (2011) Understanding
color models: a review. ARPN J Sci Technol 2:265–275
6. Ganesan P, Rajini V (2014) HSV color space based segmentation of region of interest in satellite
images. In: IEEE international conference on control, instrumentation, communication and
computational technologies (ICCICCT), pp 101–105
7. Sajiv G (2015) Unsupervised clustering of satellite images in CIELab color space using spatial
information incorporated FCM clustering method. Int J Appl Eng Res 10(20)
8. Ganesan P, Rajini V (2010) Segmentation and edge detection of color images using CIELab
color space and edge detectors. In: 2010 IEEE international conference on emerging trends in
robotics and communication technologies (INTERACT), pp 393–397
9. Kwok NM, Ha QP, Fang G (2011) Effect of color space on color image segmentation. In: 2nd
IEEE international congress on image and signal processing, pp 1–5
10. Shaik KB, Ganesan P, Kalist V, Sathish BS (2015) Comparative study of skin color detection
and segmentation in HSV and YCbCr color space. Proc Comput Sci 57:41–48
11. Kalist V, Ganesan P, Sathish BS, Jenitha JMM (2015) Possibilistic-Fuzzy C-means cluster-
ing approach for the segmentation of satellite images in HSL color space. Proc Comput Sci
57:49–56
12. Paschos G (2013) Perceptually uniform color spaces for color texture analysis: an empirical
evaluation. IEEE Trans Image Process 10:932–937
13. Zaixin Z, Lizhi C, Guangquan C (2014) Neighbourhood weighted fuzzy c-means clustering
algorithm for image segmentation. IET Image Proc 8(3):150–161
14. Ganesan P, Rajini V (2010) A method to segment color images based on modified fuzzy
possibilistic c-means clustering algorithm. RSTSCC 157–163
15. Pal NR, Pal K, Bezdek JC (2005) A possibilistic Fuzzy C means clustering algorithm. IEEE
Trans Fuzzy Syst 13:517–530
16. Krishnapuram R, Keller J (1996) A possibilistic approach to clustering. IEEE Trans Fuzzy Syst
1:98–110
Design and Implementation of Indoor
Tracking System Using Inertial Sensor
1 Introduction
Inertial tracking generally means detecting and plotting the position of a person by measuring inertia. The purpose of the indoor tracking system is to provide the position of the person by using MEMS inertial sensors. MEMS sensors and actuators have been recommended for a wide variety of applications such as aerospace engineering, biomedical, chemical analysis, communications, scanning, display, optics [1], sensors [2], etc. MEMS technology is adopted in all these areas since it is sharp,
D. Sindhanaiselvi
Pondicherry Engineering College, Pillaichavadi 605014, Puducherry, India
e-mail: sindhanaiselvi@pec.edu
T. Shanmuganantham (B)
Pondicherry Central University, Kalapet 605014, Puducherry, India
e-mail: shanmugananthamster@gmail.com
2 Block Diagram
The proposed foot-mounted tracking system consists of an inertial sensor, a controller to acquire the sensor data, a transmitting module, and a receiving module connected to a PC/laptop. The overall block diagram is shown in Fig. 1. The transmitting side is mounted on the foot of the person to be tracked. The transmitting circuit has a sensor (MPU-6050) connected to a controller (MPU-6050 to Arduino Nano interface) and to a ZigBee transmitter module for wireless transmission (Arduino Nano to ZigBee transmitter interface). The receiving side has a simple ZigBee receiver interfaced to the PC/laptop (ZigBee receiver to PC interface). An Arduino Nano with the ATmega328 IC is used for I2C communication with the MPU-6050. The PC is used for data acquisition and for plotting the sensor data in MATLAB. The Arduino is serially connected to the ZigBee module, which is used as the transmitter. Inertial sensors are sensors based on measuring inertia.
The MPU-60X0 is an integrated 6-axis motion tracking device that combines a three-axis gyroscope, a three-axis accelerometer, and a Digital Motion Processor (DMP) in a very small package. It accepts input from an external three-axis compass over an I2C sensor bus and provides a complete nine-axis motion fusion output.
The inertial sensor MPU-6050 communicates with the controller through the I2C protocol, as shown in the schematic diagram of Fig. 2. The sensor is connected to the controller via the 5 V pin.
The schematic diagram in Fig. 3 shows the serial communication established between the Arduino Nano and the ZigBee module by connecting the TX of the Arduino to the RX of the ZigBee, the RX of the Arduino to the TX of the ZigBee, the Vcc of the ZigBee to +3.3 V, and GND to the common ground.
3 Software Implementation
The sensor data from the inertial sensor is read using the following algorithm over serial communication, as shown in the flowchart of Fig. 4.
The MATLAB programming involves three stages: a main script for receiving the sensor data and two other main functions. One function detects the position of the person with the AHRS algorithm, and the other creates a 3D plot for position tracking of the person. An attitude and heading reference system (AHRS) combines sensor information on three axes to replace a traditional gyroscope and provide superior reliability and accuracy. The flowchart for the receiver side is shown in Fig. 5.
4 Experimental Setup
The proposed foot-mounted tracking system is divided into two stages. The first stage is the transmitter side, which includes a power supply, the MPU-6050, the Arduino Nano, and the ZigBee transmitter, set up as in Fig. 6; the second stage is the receiver side, which includes the ZigBee receiver and a PC or laptop, as in Fig. 7.
Figure 8 shows the experimental setup of the foot-mounted tracking system. The MPU-6050 sends the data set of accelerometer and gyroscope values during the person's movement. When the person is in motion, the acceleration and orientation vary; with these data, the position of the person can be plotted in MATLAB.
The sensor data is acquired in a MATLAB data array and plotted against time in seconds, as in Fig. 9, with acceleration versus time and angular velocity versus time. From the acceleration, the velocity is calculated and the errors are eliminated [8]; the drift and orientations are further integrated to compute the position of the person. The final tracking can be visualized in MATLAB: a 3D view is created to plot the position of the person, and Fig. 10 shows the tracking of the position in 3D.
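As a rough illustration of the processing chain described above (filtering and double integration of the acceleration to obtain velocity and position), the following Python sketch shows one simple way to do it with SciPy. It is not the authors' MATLAB implementation; the sampling rate, filter order, cutoff frequency, and the synthetic test signal are assumed values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def integrate_position(acc, fs=100.0, hp_cutoff=0.1):
    """Integrate a 1-D acceleration signal (m/s^2) twice to obtain position.

    A high-pass filter is applied after each integration to suppress the
    drift that accumulates when the person is standing still.
    """
    b, a = butter(2, hp_cutoff / (fs / 2), btype="highpass")
    dt = 1.0 / fs
    vel = np.cumsum(acc) * dt          # acceleration -> velocity
    vel = filtfilt(b, a, vel)          # remove low-frequency drift
    pos = np.cumsum(vel) * dt          # velocity -> position
    pos = filtfilt(b, a, pos)
    return vel, pos

# Example with a synthetic burst of acceleration (a single step-like motion):
t = np.arange(0, 5, 0.01)
acc_x = np.where((t > 1) & (t < 1.2), 2.0, 0.0)   # hypothetical data
velocity, position = integrate_position(acc_x)
```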
6 Conclusion
An experimental setup for indoor positioning based on a foot-mounted MEMS sensor has been presented. The measured acceleration is converted into translational velocity and position by MATLAB coding. The design of high-pass and low-pass filters to eliminate the systematic errors caused by a person standing still is included in the MATLAB coding. The experimental results for the sample trajectories showed that the proposed algorithm accurately computes the position along the three axes (x, y, and z).
Acknowledgements The authors would like to express sincere gratitude to Pondicherry Engineering College for providing the facilities to carry out the experimentation for this research work. Exemption from review: proposals which present less than minimal risk fall under ICMR guidelines.
References
1 Introduction
The brushless direct current (BLDC) motor has a wide range of applications because of its advantages over the brushed direct current motor, such as high efficiency, flat speed-torque characteristics, wide speed range, smaller size and lighter weight, longer life, low noise, and good dynamic response. Moving the permanent magnets to the rotor and driving the field coils with power electronic switches eliminates the brushes of the DC motor. BLDC motors are therefore often called electronically commutated motors. They require an electronic control mechanism to determine the position of the rotor continuously. The position of the rotor can be determined either by measuring the changes in back EMF at each of the armature coils, known as sensorless control [1–3], or by using a Hall effect sensor. As sensors cannot be used in all applications, the back EMF method of computation is used; it simplifies the motor construction and can reduce the cost of the motor. A PID controller is used here due to its simple structure and ease of operation [4–6].
The particle swarm optimization (PSO) algorithm is considered one of the most promising population-based algorithms for solving numerical optimization problems; it belongs to the swarm intelligence methods. Nature-inspired algorithms can adapt solutions to changing circumstances, which matters because the operation of a BLDC motor involves nonlinear constraints and non-stationary conditions [7]. A constriction factor, a chaotic inertia weight, and an exponential inertia weight are incorporated into the original PSO algorithm to improve the velocity update of the particles and to ensure convergence [8, 9].
The BLDC motor drive is shown in Fig. 1; it consists of the BLDC motor, a voltage source inverter, a hysteresis current controller, a reference current generator, and a PID controller. The BLDC motor is supplied by the voltage source inverter [10]. The switching functions obtained from the hysteresis current controller serve as inputs to the voltage source inverter, and the reference currents generated by the reference current generator serve as inputs to the hysteresis current controller. Based on the position of the rotor, the reference currents iaref, ibref, and icref are obtained from the reference current magnitude iref. The actual speed of the motor is compared with the reference speed, the error is applied to the controller, and its output serves as the input to the reference current generator.
T_J + T_D + T_S + T_L = T_e   (2)

where E = K_e \omega_m is the back EMF.
3 Objective Function
The class of population-based algorithms for solving global optimization problems, such as particle swarm optimization and the bat and firefly algorithms, is often called nature-inspired algorithms. These algorithms offer many advantages for difficult optimization tasks. In contrast to the classical methods, nature-inspired algorithms can adapt solutions to changing circumstances and give a robust response to those changes. The PSO algorithm is a population-based optimization technique developed for solving complex optimization problems [11, 12].
Let a swarm have n particles in a d-dimensional search space. The position of the ith particle at the tth iteration can be expressed as

x_i^t = \left( x_{i1}^t, x_{i2}^t, \ldots, x_{id}^t \right)   (7)

and the best past position (pbest) of the ith particle can be expressed as

pbest_i^t = \left( pbest_{i1}^t, pbest_{i2}^t, \ldots, pbest_{id}^t \right)   (8)

The objective function of each particle is calculated and, after comparison, the pbest of the current iteration is recorded as

pbest_i^{t+1} = \begin{cases} pbest_i^t & \text{if } f_i^{t+1} \ge f_i^t \\ x_i^{t+1} & \text{if } f_i^{t+1} \le f_i^t \end{cases}   (9)

The best position among all the particles (gbest) can be expressed as

gbest_i^t = \left( gbest_{i1}^t, gbest_{i2}^t, \ldots, gbest_{id}^t \right)   (10)

For the calculation of the global best, the objective function associated with the pbest of each of the n particles is compared with that of the previous iteration, and the minimum among all is recorded as the current overall gbest:

gbest_i^{t+1} = \begin{cases} gbest_i^t & \text{if } f_i^{t+1} \ge f_i^t \\ pbest_i^{t+1} & \text{if } f_i^{t+1} \le f_i^t \end{cases}   (11)

Each particle changes its position in the search space according to its current velocity as

x_{id}^{t+1} = x_{id}^t + v_{id}^{t+1},   (13)

where x_{id}^t is the current position of particle i at iteration t.
To update the velocity of each particle in the population, there are a number of possibilities to enhance the particle velocity used in this optimization technique. Some methods are described here.
The original PSO algorithm does not ensure the convergence of a particle towards its attractors. Clerc and Kennedy proposed the constriction methodology to ensure convergence and to improve the search ability [12]. The main advantage of this methodology is that, without changing the basic equation, a constriction factor (χ) based on the acceleration constants c_1 and c_2 is incorporated into the original PSO algorithm to update the velocity:

\chi = \frac{2}{\left| 2 - \phi - \sqrt{\phi^2 - 4\phi} \right|}   (14)

where \phi = c_1 + c_2, \phi > 4.
The velocity of the ith particle with the constriction factor (χ) in the original PSO algorithm can be expressed as

v_{id}^{t+1} = \chi \left[ v_{id}^t + c_1\, rand_1 \left( pbest_{id} - x_{id}^t \right) + c_2\, rand_2 \left( gbest_d - x_{id}^t \right) \right]   (15)
z = \mu\, z (1 - z)   (16)

where z is a random number in the range [0, 1]; by taking μ = 4, the chaotic result remains between 0 and 1. The chaotic inertia weight in terms of z can be written as

w_c = \frac{0.5\,(maxit - iter)}{maxit} + 0.4\,z   (17)

This chaotic inertia weight is introduced into the velocity equation of the original PSO algorithm as

v_{id}^{t+1} = w_c\, v_{id}^t + c_1\, rand_1 \left( pbest_{id} - x_{id}^t \right) + c_2\, rand_2 \left( gbest_d - x_{id}^t \right).   (18)
To overcome the later-period oscillations of the original PSO, the standard inertia weight (w) is multiplied by its exponential to produce an improved PSO. This exponential inertia weight is introduced into the velocity equation of the standard PSO algorithm as

v_{id}^{t+1} = w \exp(w)\, v_{id}^t + c_1\, rand_1 \left( pbest_{id} - x_{id}^t \right) + c_2\, rand_2 \left( gbest_d - x_{id}^t \right)   (19)
Step 1. Read the system data, initialize the PSO parameters, and randomly generate the initial solutions satisfying the constraints.
Step 2. Calculate the fitness value of the objective function using Eq. (6).
Step 3. Calculate pbest: compare with the previous iteration and select the lower value of the objective function as pbest.
Step 4. Calculate gbest, i.e., the best pbest among all the particles in the current iteration; compare with the previous iteration and select the lower value as the overall gbest.
Step 5. After finding pbest and gbest, update the velocity using Eqs. (15), (18), and (19) for PSO-C, PSO-CIW, and PSO-EIW, respectively, and update the position using Eq. (13) for every iteration.
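The sketch below illustrates, under simplified assumptions (a toy objective function standing in for the time-domain cost of Eq. (6)), how the three velocity-update variants of Eqs. (15), (18), and (19) and the position update of Eq. (13) can be coded in Python; the parameter values, names, and example usage are illustrative only.

```python
import numpy as np

def pso(objective, dim=3, n_particles=20, max_iter=50,
        variant="constriction", c1=2.05, c2=2.05, w=0.4, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, (n_particles, dim))      # positions (e.g., Kp, Ki, Kd)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()

    phi = c1 + c2                                       # phi > 4 for Eq. (14)
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))
    z = rng.random()                                    # seed of the logistic map

    for it in range(max_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        cognitive = c1 * r1 * (pbest - x)
        social = c2 * r2 * (gbest - x)
        if variant == "constriction":                   # Eq. (15)
            v = chi * (v + cognitive + social)
        elif variant == "chaotic":                      # Eqs. (16)-(18)
            z = 4.0 * z * (1.0 - z)                     # logistic map, mu = 4
            wc = 0.5 * (max_iter - it) / max_iter + 0.4 * z
            v = wc * v + cognitive + social
        else:                                           # "exponential", Eq. (19)
            v = w * np.exp(w) * v + cognitive + social  # w kept small so w*exp(w) < 1
        x = x + v                                       # Eq. (13)

        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy usage: minimize a simple quadratic as a stand-in for the drive's cost function.
best, best_f = pso(lambda p: float(np.sum((p - 0.3) ** 2)), variant="chaotic")
```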
Fig. 2 Problem-specific implementation flowchart of the PSO algorithms
[Plot: convergence of the objective function value versus no. of iterations for the PSO algorithms]
Fig. 4 a Speed response of PSO-C. b Speed response of PSO-CIW. c Speed response of PSO-EIW. d Comparison of the speed responses of PSO-C, PSO-CIW, and PSO-EIW (speed in rad/s versus time in s, with the reference speed shown)
6 Conclusion
In this paper, several inertia weight strategies in PSO algorithms are proposed for
optimal control of the brushless direct current motor drive. The motor drive has
been simulated in MATLAB/ SIMULINK with parameter description shown in the
appendix. Several simulation tests were performed to investigate the convergence
and time domain performance parameters. The time domain parameters validate
the effectiveness of inertia weight strategies in PSO algorithms in reducing system
oscillations and with a low rise and settling time. Simulation results of three PSO
algorithms are compared and results confirm that exponential inertia weighted PSO
converges faster and constriction factor based PSO has very less convergence value.
Appendix
References
1. Merugumalla MK, Navuri PK (2017) Sensorless control of BLDC motor using bio-inspired
optimization algorithm and classical methods of tuning PID controller. J Instrument Control
Eng 5(1):16–23
2. Merugumalla MK, Prema Kumar N (in press) FFA based speed control of BLDC motor drive.
Int J Intell Eng Informat
3. Kim TH, Ehsani M (2004) Sensorless control of the BLDC motors from near-zero to high
speeds. IEEE Trans Power Elect 19(6)
484 M. K. Merugumalla and P. K. Navuri
4. Alberto AP, Michael F, Chunjiang Q (2009) Particle swarm optimization for PID tuning of a
BLDC motor. In: International conference on systems, man and cybernetics, October 2009,
San Antonio, TX, USA, pp 3917–3922
5. Ilir J, Petraq M (2015) PID design with bio-inspired intelligent algorithms for high order
systems. Int J Math Comput Simulat 9:44–52
6. Altinoz OT, Erdem H (2015) Particle swarm optimization-based PID controller tuning for static
power converters. Int J Power Electron 7(1/2):16–35
7. Merugumalla MK, Prema Kumar N (2017) Optimized PID controller for BLDC motor using
nature-inspired algorithms. Int J Appl Eng Res 12(1):415–422
8. Eberhart RC, Shi Y (2000) Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of the 2000 congress on evolutionary computation, vol 1, pp 84–88
9. Huynh DC, Dunnigan MW (2012) Advanced particle swarm optimization algorithms for
parameter estimation of a single-phase induction machine. Int J Modell Identificat Control
15(4):227–240
10. Eti SL, Prema Kumar N (2014) Closed loop control of BLDC motor drive using adaptive fuzzy
tuned PI controller. J Eng Res Appl 4:93–104
11. Adewumi AO, Arasomwan MA (2016) On the performance of particle swarm optimisa-
tion with(out) some control parameters for global optimization. Int J Bio-Inspired Computat
8(1):14–32
12. Mauro SI, Johann R (2011) Particle swarm optimization with inertia weight and constriction
factor. In: International conference on swarm intelligence. Cergy, France, pp 1–11
Beam Steering Cuboid Antenna Array
for L Band RADAR
Abstract In surveillance and electronic warfare systems, radars with complete 360° beam steering capability are essential, so antenna arrays capable of beam steering over the complete azimuth and elevation planes are required. Phased array antennas have been used for this purpose, but the complex feed network of a phased array introduces adverse effects on array performance, and the physical geometry of planar antenna arrays limits their beam steering capability. To overcome this limitation of physical geometry, an L band cuboid antenna array operating at 1.35 GHz is presented in this paper. The proposed antenna array does not need any complex feed network or phase shifter circuits to steer its beam. Instead, it takes advantage of the proposed cuboid geometry, and full-axis beam steering is achieved by switching the antenna elements in the array. Simulated array characteristics such as gain, return loss, mutual coupling, VSWR, and radiation pattern are explained in detail.
S. Kanapala · N. A. Rao
Department of ECE, Vignan’s Foundation for Science, Technology & Research,
Vadlamudi, Guntur, Andhra Pradesh, India
e-mail: satishkanapala@gmail.com
N. A. Rao
e-mail: anandnelapati@gmail.com
M. Sekhar (B)
Acharya Nagarjuna University, Guntur, Andhra Pradesh, India
e-mail: sekhar.snha@gmail.com
1 Introduction
Antenna arrays with Non-planar geometry are useful for the applications where
the planar arrays cannot perform properly due to their limitations of the physical
structure. Non-planar antenna array geometries like a sphere, cone, cylinder, etc., are
considered for specific applications [1, 2]. In these arrays, beam steering is achieved
by switching the antenna elements according to the beam steering requirement. There
is no need of any complex feed network for beamforming in these antenna arrays and
also there is no need of any phase shifter circuit to provide different phase to different
antenna elements for achieving beam steering [3]. By eliminating feed network the
performance of the antenna also increases as there is no loss of antenna performance
due to the effects of mutual coupling and gain reduction [4, 5]. The antenna array
used for this case study consists of five antenna elements which are positioned on a
cuboid geometry. By switching the excitation for the 5 elements in a circular manner,
we can steer the beam generated by the array electronically in the azimuthal plane.
Five beams can be generated which can cover the complete azimuthal axis without
any intermediate perturbations in the coverage area. By implementing this cuboid
structure we can have beam steering in both azimuthal and elevation planes without
any complex feed networks. An L Band cuboid array antenna is designed for study
purpose. Various antenna array characteristics are been studied and are presented
below.
As explained above, the proposed antenna array has five single antenna elements aligned such that they form a cuboid structure, with one antenna element per elevation plane. To avoid phase shifter circuits and a feed network, a microstrip antenna element with an individual feed to each element is used, so that no external feed network is needed for providing phase shift and excitation [6, 7]. A square patch antenna fed with a coaxial feed is designed using the Ansys HFSS simulation software. The simulated single antenna element is shown in Fig. 1. A low-cost FR4 substrate with a thickness of 62 mils and a length and width of 95 mm has been used for the proposed antenna. The side length of the square patch is 46.8 mm.
The return loss plot obtained for a single antenna element is shown in Fig. 2, where a return loss of −29.44 dB is observed at 1.35 GHz.
[Fig. 2: simulated return loss dB(S(1,1)) of the single antenna element versus frequency (GHz); marker m1 at 1.35 GHz, −29.44 dB]
To study the various antenna parameters in the cuboid array geometry, the related simulations are carried out on the cuboid structured antenna array shown in Fig. 3.
Various antenna parameters such as return loss, VSWR, and radiation patterns, and antenna array parameters such as mutual coupling and beam steering of the proposed antenna, are presented in this section. A return loss of less than −25 dB is achieved for the antenna elements, along with a VSWR value of 1.08, which indicates that the antenna elements are well matched.
[Plot: simulated return loss dB(S(1,1)) to dB(S(5,5)) of the five array elements versus frequency (GHz)]
[Plot: VSWR of the array elements versus frequency (GHz); marker m1 at 1.35 GHz, VSWR = 1.085]
The mutual coupling between the cuboid antenna array elements is shown in Fig. 6. A mutual coupling of −24 dB is observed for the antenna elements in the immediate neighborhood of the excited element, and a mutual coupling of −49 dB for the remaining antenna elements. From the results, it is observed that very low mutual coupling is present between the antenna elements [8–10].
The proposed cuboid antenna model achieves beam steering in both the elevation and azimuthal planes easily by taking advantage of its geometry. Beam steering is achieved simply by exciting the antenna elements in the direction of the desired beam. Depending on the beam steering requirement, sometimes a single antenna element is excited and sometimes two antenna elements need to be excited. Figure 7 shows how beam steering can be achieved by switching the antenna elements.
[Fig. 6: simulated mutual coupling dB(S(1,2)) to dB(S(1,5)) versus frequency (GHz)]
For the pattern shown in Fig. 7a, we need to excite the left antenna element alone.
For the pattern shown in Fig. 7b, we need to excite the left antenna element and
the top antenna element. For the pattern shown in Fig. 7c, we need to excite the top
antenna element alone. For the pattern shown in Fig. 7d, we need to excite both the
top antenna element and the right antenna element. For the pattern shown in Fig. 7e,
we need to excite the right antenna element alone. In this way, by simply switching
the antenna elements we can achieve beam steering without any phase shifter circuits
or complex feed networks for beamforming.
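To make the element-switching idea concrete, the short sketch below maps a requested beam direction in the elevation cut to the element(s) to excite, mirroring the five cases of Fig. 7; the element labels and the sector boundaries are our own illustrative assumptions, not values taken from the paper.

```python
def elements_to_excite(steer_deg):
    """Select which element(s) to excite for a desired beam direction in the
    elevation cut, following the five switching cases of Fig. 7a-e.

    steer_deg runs from -90 (left broadside) to +90 (right broadside);
    the sector boundaries below are illustrative assumptions only.
    """
    if steer_deg < -67.5:
        return ["left"]                 # Fig. 7a
    if steer_deg < -22.5:
        return ["left", "top"]          # Fig. 7b
    if steer_deg <= 22.5:
        return ["top"]                  # Fig. 7c
    if steer_deg <= 67.5:
        return ["top", "right"]         # Fig. 7d
    return ["right"]                    # Fig. 7e

# Example: a beam requested at +40 deg would excite the top and right elements.
print(elements_to_excite(40))           # ['top', 'right']
```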
Figures 8 and 9 show the plots of beam steering in the elevation and azimuthal planes. From Fig. 8 it can be observed that a beam steering of 180° is achieved, and from Fig. 9 that a beam steering of 360° is achieved. In both plots, a slight perturbation can be observed between the beam steering patterns, but there is no loss of gain at any beam angle.
These perturbations will be covered when the single antenna elements are replaced with antenna arrays, which give more flexibility in exciting the antenna elements. The proposed model is considered here for case study purposes, but real-time applications require antennas with high gain and narrow beamwidth; for such applications, the individual antenna elements should be replaced with antenna arrays. In the figures below, each colored plot represents a different case of antenna element excitation. With this geometry there is a very low sidelobe level, and no additional measures are needed for sidelobe level reduction.
[Fig. 7 a-e: simulated radiation patterns (gain in dB, polar plots over −180° to 180°) for the five element-switching cases]
[Fig. 8: total gain (dB) versus theta (deg) at 1.35 GHz, phi = 0°, for the different element excitations, showing beam steering in the elevation plane]
[Fig. 9: total gain (dB) versus phi (deg) at 1.35 GHz for the different element excitations, showing beam steering in the azimuthal plane]
4 Conclusion
A cuboid antenna array capable of beam steering in both the azimuthal and elevation planes has been presented. The proposed array takes advantage of its cuboid geometry and achieves beam steering of 360° in the azimuthal plane and 180° in the elevation plane without the need for any phase shifter circuit. Beam steering is achieved by exciting the antenna elements according to the required beam direction. Each antenna element in the cuboid antenna array is fed with a coaxial feed, which eliminates the need for a feed network. The proposed cuboid geometry also helps in obtaining a low level of mutual coupling among the antenna array elements. From the obtained simulation results, it can be observed that the proposed cuboid antenna array can be used for tracking and surveillance applications.
References
1. Shu T (2011) Design considerations for DBF phased array 3D surveillance radar. In: IEEE CIE
international conference on radar, vol 1, pp 360–363
2. Wen-bin Gong (2010) DBF multi-beam transmitting phased array antenna on LEO satellite.
Acta Electron Sinica 38(12):2904–2909
3. Shi-Qiang F (2010) Broadband circularly polarized slot antenna array fed by asymmetric CPW
for L-Band application. IEEE Antennas Wirel Propag Lett 8:1014–1016
4. Doane JP, Sertel K, Volakis JL (2012) A 6.3:1 bandwidth scanning tightly coupled dipole
array with co-designed compact balun. In: Antennas and Propagation Society International
Symposium, July 2012
5. Ge L, Luk KM (2015) A three-element linear magneto-electric dipole array with beamwidth
reconfiguration. IEEE Antennas Wirel Propag Lett 14:28–31
6. Debogovic T, Bartoli J, Perruisseau-Carrier J (2014) Dual-polarized partially reflective sur-
face antenna with MEMS-based beamwidth reconfiguration. IEEE Trans Antennas Propag
62(1):228–236
7. Lafond O, Caillet M, Fuchs B, Himdi M (2010) Microwave and millimeter wave technologies
modern UWB. Antennas Equip. InTech
8. Khidre A, Yang F, Elsherbeni AZ (2013) Reconfigurable microstrip antenna with tunable
radiation beamwidth. In: 2013 IEEE antennas and propagation society international symposium
(APSURSI). Orlando, FL, pp 1444–1445
9. Bai YY, Xiao S, Tang MC, Ding ZF, Wang BZ (2011) Wide-angle scanning phased array with
pattern reconfigurable elements. IEEE Trans Antennas Propag 59(11):4071–4076
10. Brookner E (2007) Phased-array and radar breakthroughs. In: 2007 IEEE radar conference,
Boston, MA, pp 37–42
An Intelligent Fault Location Algorithm
for Double Circuit Transmission Line
Based on DFT-ANN Approach
1 Introduction
The detection and location of faults in long transmission lines plays a significant role in making the power transmission system more reliable. Fault location estimation is an important feature of the distance relays employed in the transmission system. Through proper estimation of the fault location, the time required by the line patrolling crew to reach the fault location is reduced to a minimum, while improper location of faults leads to reliability and stability issues in the transmission system. In this context, many researchers and practicing engineers have come up with numerous algorithms to estimate the fault location in transmission lines over the years. These algorithms can be classified based on the parameters used as features: (a) those based on impedance measurements at the relay location [1, 2], (b) those based on differential components measured at both ends of the transmission line [3], and (c) those based on traveling wave theory, which uses signals recorded at either end of the transmission line [4, 5]. Many of these location algorithms suffer from underreach problems owing to high-impedance fault conditions and overreach problems owing to the DC offset component. Traveling wave based fault location schemes have difficulties for close-in faults and also for shunt faults with (approximately) zero fault inception angle.
In recent times, an intelligent technique adopted for the protection of power transmission lines is the Artificial Neural Network (ANN). This intelligent technique has versatile usage in pattern recognition, clustering, classification, and generalization. Neural network based techniques are easy to adapt to multifaceted problems in power system applications, since they can be trained with data recorded under widespread circumstances and have sufficient learning capability. ANNs show outstanding features such as noise immunity, robustness, and fault tolerance. Thus, the trip signal issued by an intelligent relay will not be affected by discrepancies in the system parameters. An ANN-based faulty-phase identification and distance location scheme for single circuit transmission lines was described in [6]. A wavelet-fuzzy neural network based SLG fault location method was employed in an industrial distribution system [7]. In [8], a fault distance locator for SLG faults on double circuit transmission lines using an ANN has been described. An ANN-based earth fault classification for a double circuit transmission line is employed in [9]. A non-recursive fault location algorithm using two-terminal unsynchronized measurements has been described in [10]. The distance protection scheme for parallel transmission lines in the case of ground faults has been investigated and compared with traditional double circuit transmission line protection schemes in [11]. However, the above-stated schemes have certain merits and demerits in providing a proper protection mechanism for transmission lines.
This paper presents an intelligent fault location method based on a DFT-ANN approach to estimate the fault location in a practical double circuit transmission line of the Chhattisgarh state transmission system, using only one-end data in the presence of mutual coupling. The results attained through simulations demonstrate that all the shunt faults are located appropriately within acceptable error. The algorithm is insensitive to mutual coupling and diverse system conditions. This intelligent algorithm does not require any communication platform to record either remote-end data or the zero-sequence data of neighboring lines, and it has not been described previously for the estimation of fault location in an existing power system network.
An existing 400 kV, 50 Hz double circuit transmission line was chosen from the Chhattisgarh state transmission system, which is shown in Fig. 1. The network parameters are given in the Appendix (Table 4). The network comprises two power stations (Station-I
with 4 × 500 MW and Station-II with 3 × 210 MW), three double circuit transmis-
sion lines (towards Vindyachal with 215 km, Raipur/PGCIL-B with 220 km and
Bhilai/Khedamara with 198 km) and four single circuit transmission lines (towards
Korba-West with 17 km, Bhatpara with 100 km, BALCO with 45 km and Vandana
with 6 km) at bus-4 (KSTPS/NTPC). At bus-3 (Bhilai/Khedamara), two triple cir-
cuit lines (from Raipur/Raita with 65.68 km and another towards Bhilai 220 kV grid
bus), one double circuit line (from KSTPS/NTPC 198 km) and six single circuit lines
(from Korba-West with 212 km, Bhatpara with 90 km, towards Raipur/PGCIL-A with
20 km, Seoni with 250 km, Koradi with 272 km and Bhadrawati with 322 km) are
connected. In this paper, 198 km double circuit transmission line connected between
bus-3 and 4 (KSTPS/NTPC and Bhilai/Khedamara) is selected and the complete
network is modeled in MATLAB/Simulink software to perform simulation studies
for various types of shunt faults at a different location with different fault parameters.
The proposed distance location scheme is located at bus-4 (KSTPS/NTPC).
The proposed fault location algorithm deals with all types of shunt faults which
occur in double circuit transmission lines. Herein four intelligent fault locators (DFT-
ANN modules) have been designed separately to locate all 10 types of shunt faults.
With the help of these DFT-ANN modules, the location of all types of shunt faults
can be estimated by using one terminal data only. The flowchart of the proposed
fault location algorithm has been depicted in Fig. 2. The simulation studies have
been conducted in MATLAB/Simulink environment at different fault scenarios to
demonstrate the robustness of the proposed intelligent technique.
[Fig. 1: single-line diagram of the 400 kV Chhattisgarh state transmission network under study, showing the power stations, the buses (KSTPS/NTPC, Bhilai/Khedamara, Raipur/PGCIL, Raigarh, Durg, Korba-West, Bhatapara, etc.), line lengths in km, shunt compensation in MVAr, and the fault position on the 198 km KSTPS/NTPC to Bhilai/Khedamara double circuit line]
[Fig. 2: flowchart of the proposed fault location scheme; the measured voltages and currents of the double circuit line pass through an anti-aliasing filter, and the DFT-ANN1 to DFT-ANN4 modules estimate the fault distance in km]
X = \left[ I_{a1}, I_{b1}, I_{c1}, V_{a1}, V_{b1}, V_{c1}, I_{g1}, I_{a2}, I_{b2}, I_{c2} \right]   (1)

Y_1 = [L]   (2)
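Assuming, as is usual for DFT-based relaying, that the fundamental phasors forming the input vector X of Eq. (1) are extracted with a full-cycle discrete Fourier transform of the sampled signals, a minimal Python sketch of such a phasor estimator is given below; the sampling rate, window length, and test signal are illustrative choices, not values from the paper.

```python
import numpy as np

def fundamental_phasor(samples, fs=1000.0, f0=50.0):
    """Full-cycle DFT estimate of the 50 Hz fundamental phasor of one channel.

    samples: at least one cycle of a sampled voltage or current signal.
    Returns a complex phasor whose magnitude is the peak of the fundamental.
    """
    n = int(round(fs / f0))                               # samples per cycle
    window = np.asarray(samples[-n:], dtype=float)        # latest full cycle
    k = np.arange(n)
    # Correlation with one cycle of a complex exponential (full-cycle DFT)
    return (2.0 / n) * np.sum(window * np.exp(-2j * np.pi * k / n))

# Example: a 50 Hz test signal sampled at 1 kHz, amplitude 1.0, phase 30 deg.
t = np.arange(0, 0.04, 1.0 / 1000.0)
x = np.cos(2 * np.pi * 50 * t + np.deg2rad(30))
p = fundamental_phasor(x)
print(abs(p), np.rad2deg(np.angle(p)))                    # about 1.0 and 30 deg
```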
After designing the input/output datasets with expedient features for the training of the DFT-ANN modules, the foremost task is to design an appropriate ANN architecture by selecting the number of hidden layers and the number of neurons per hidden layer, and to ensure that the network structure is properly normalized/generalized for minimum training error. Since there are no empirical formulas or standards available for choosing the number of hidden layers and neurons, the training architecture is determined heuristically.
Table 1 Parameters used to generate the datasets for training and testing of the DFT-ANN modules
Parameter | Training | Testing
Fault type | LG: A1G, B1G, C1G; LLG: A1B1G, B1C1G, A1C1G; LL: A1B1, B1C1, A1C1; LLLG: A1B1C1G | LG: A1G; LLG: A1B1G; LL: A1B1; LLLG: A1B1C1G
Fault location (Lf) | (2–196) km in steps of 2 km | (1–197) km in steps of 2 km
Fault inception angle | 0°, 90° and 270° | 0°, 90° and 270°
Fault resistance (Rf) | 0, 50 and 100 Ω | 0, 50 and 100 Ω
No. of fault cases | LG: 2646; LLG: 2646; LL: 2646; LLLG: 882 | LG: 891; LLG: 891; LL: 891; LLLG: 891
The number of layers and the number of neurons per hidden layer were selected by random experimentation with 5, 10, …, 50 neurons. The tansig activation function was generally used, together with the Levenberg–Marquardt (LM) algorithm, to train the DFT-ANN modules separately. Finally, a total of four DFT-ANN modules have been designed and trained to locate all common shunt faults: DFT-ANN1 for SLG (AG, BG, CG) faults, DFT-ANN2 for LLG (ABG, ACG, BCG) faults, DFT-ANN3 for LL (AB, AC, BC) faults, and DFT-ANN4 for LLL (ABC/ABCG) faults. All 10 types of shunt faults, including LG, LLG, LL, and LLLG, have been simulated on the three-phase double circuit transmission line at different fault locations with changes in fault resistance (0, 50, 100 Ω) and fault inception angle (0°, 90°, 270°) to ensure the accuracy of the fault locator. The hidden architecture and the corresponding training parameters are reported in Table 2.
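As a rough, library-level illustration of how one such module could be trained (the study used MATLAB's neural network tooling; the scikit-learn stand-in below, its hyperparameters, and the placeholder data are our assumptions), a DFT-ANN regressor mapping the ten features of Eq. (1) to the fault distance of Eq. (2) might look like this:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X_train: (n_cases, 10) array of DFT fundamental magnitudes
#          [Ia1, Ib1, Ic1, Va1, Vb1, Vc1, Ig1, Ia2, Ib2, Ic2] per Eq. (1).
# y_train: (n_cases,) array of fault distances in km per Eq. (2).
rng = np.random.default_rng(0)
X_train = rng.random((2646, 10))          # placeholder data, not the real dataset
y_train = rng.uniform(2, 196, 2646)       # placeholder fault locations (km)

dft_ann1 = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 20),   # chosen heuristically, as in the text
                 activation="tanh",             # analogous to tansig
                 solver="lbfgs",                # quasi-Newton solver in place of LM
                 max_iter=2000,
                 random_state=0),
)
dft_ann1.fit(X_train, y_train)

# Estimated location for a new set of one-end phasor magnitudes:
X_new = rng.random((1, 10))
print(dft_ann1.predict(X_new))            # fault distance estimate in km
```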
The fault locator based on the DFT-ANN approach has been tested with extensive data sets comprising different fault scenarios at different fault locations that were not used during training. To examine the impact of the fault parameters on the performance of the proposed intelligent algorithm, it has been tested with a separate data set that represents a wide range of fault conditions. The trained DFT-ANN modules have been tested and their performance corroborated by simulating different fault cases with changes in fault location (Lf = 1–197 km in steps of 2 km), fault resistance (Rf = 0, 50, 100 Ω), and fault inception angle (0°, 90°, 270°). The estimated fault location error E is calculated as a percentage of the total line length Lt using (3). From the test results for different fault cases depicted in Table 3, it can be observed that the DFT-ANN modules properly locate all faults; the mean percentage error for 99 different locations along the line is also summarized in this table. Further, Fig. 3 illustrates the test results of the DFT-ANN3 module for an A1B1 fault, with the percentage error in the estimated fault location at different fault points throughout the line, assuming Rf = 0 Ω and a fault inception angle of 0°.
\%Error = \frac{L_f(\text{Actual}) - L_f(\text{Estimated})}{L_t(\text{Line Length})} \times 100   (3)
Table 3 (continued)
DFT-ANN module Type of fault Fault resistance (Ω) Fault inception angle (°) Mean error (%) (99 locations)
50 0 −7.9797e−4
50 90 −0.0013
50 270 −0.0014
100 0 0.0029
100 90 1.4826e−4
100 270 6.0940e−5
DFT-ANN4 A1B1C1G 0 0 −3.4474e−4
0 90 −6.002e−4
0 270 −5.9889e−4
50 0 0.0029
50 90 0.0015
50 270 0.0014
100 0 −0.0011
100 90 −5.6927e−4
100 270 −4.9600e−4
Fig. 3 Test results of the DFT-ANN3 module for A1B1 faults with Rf = 0 Ω and a fault inception angle of 0°
5 Conclusion
Acknowledgements The authors acknowledge the financial support of the Central Power Research Institute, Bangalore, for funding the project (No. RSOP/2016/TR/1/22032016, dated 19.07.2016). The authors are thankful to the Head of the institution as well as the Head of the Department of Electrical Engineering, National Institute of Technology, Raipur, for providing the research facilities to carry out this research project. The authors are grateful to the local power utility (Chhattisgarh State Power Transmission Company Limited) for their cooperation in providing valuable data to execute the research work.
Appendix
See Table 4.
Table 4 (continued)
Generator Transmission line
Parameter KSTPS-I KSTPS-II Parameter KSTPS to Khedamara
Xd″ (pu) 0.212 0.185 – –
Xq″ (pu) 0.233 0.147 – –
Td0′ (s) 6.69 4.8 – –
Tq0′ (s) 2.5 0.5 – –
Td0″ (s) 0.038 0.0437 – –
Tq0″ (s) 0.05 0.141 – –
H (s) 3 4.129 – –
References
1. Sachdev MS, Agarwal R (1988) A technique for estimating transmission line fault locations
from digital impedance relay measurement. IEEE Trans Power Deliv 3(1):121–129
2. Mazon AJ et al (1995) New method of fault location on double-circuit two-terminal transmis-
sion lines. Electr Power Syst Res J 35(3):213–219
3. García-Gracia M, Osal W, Comech MP (2007) Line protection based on the differential equation
algorithm using mutual coupling. Electr Power Syst Res J 77(5–6):566–573
4. Thomas DW, Carvalho RJ, Pereira ET (2003) Fault location in distribution systems based on
traveling waves. In: Proceedings of IEEE PowerTech conference, vol 2, pp 1–5
5. Thomas DWP, Christopoulos C, Tang Y, Gale P, Stokoe J (2004) Single ended traveling wave
fault location scheme based on wavelet analysis. In: Proceedings of IEEE international confer-
ence on development in power system protection, vol 1, pp 196–199
6. Jain A, Kale VS, Thoke AS (2006) Application of artificial neural networks to transmission
line faulty phase selection and fault distance location. In: Proceedings of the IASTED interna-
tional conference on energy and power system, Chiang Mai, Thailand, Paper No. 526–803, pp
262–267
7. Chunju F, Li KK, Chan WL, Weiyong Y, Zhaoning Z (2007) Application of wavelet fuzzy
neural network in locating single line to ground fault (SLG) in distribution lines. Int J Electr
Power Energy Syst 29(6):497–503
8. Jain A, Thoke AS, Patel RN (2009) Double circuit transmission line fault location using artificial
neural network. CSVTU J, Bhillai, C.G. India, 2(1):40–45
9. Jain A, Thoke AS, Patel RN (2009) Classification of single line to ground faults on double
circuit transmission line using ANN. Int J Comput Electr Eng (IJCEE) 1(2):199–205
10. Izykowski J, Rosolowski E, Balcerek P, Fulczyk M, Saha MM (2011) Accurate noniterative
fault-location algorithm utilizing two-end unsynchronized measurements. IEEE Trans Power
Deliver 26(02):547–555
11. Jia K, Bi T, Li W, Yang Q (2015) Ground fault distance protection for paralleled transmission
lines. IEEE Trans Indust Appl 51(06):110–119
Assessment of EO-1 Hyperion Imagery
for Crop Discrimination Using Spectral
Analysis
Abstract This paper outlines research to discriminate crop species using the pure spectral-spatial reflectance of EO-1 Hyperion imagery. Rapid advances in remote sensing open new avenues for investigating hyperspectral imagery for crop-type classification and agricultural management. The investigated crop species were sorghum, wheat, and cotton, located in the west zone of Aurangabad, Maharashtra, India. A preprocessing algorithm, the quick atmospheric correction (QUAC), was applied to calibrate the bad bands and construct precise data for crop discrimination. Machine learning classifiers were applied to identify pixels having significant differences in their pure spectral signatures, based on Ground Control Points (GCPs) and image spectral responses. The investigation was based on binary encoding (BE) and support vector machine (SVM) classifiers.
1 Introduction
Fig. 1 The acquired Hyperion false color composite (bands 45-33-20) and the subset covering the study area
3 Processing Mechanism
In the current research, the Hyperion space-borne sensor is used, considering its spectral, spatial, and radiometric properties. The Earth Observing-1 (EO-1) Hyperion sensor mission is unique, with 220 contiguous hyperspectral bands. Out of these, 196 bands were well calibrated and 24 bands were considered noisy, as they contain no information. The study area was visited a second time, based on the existing GCPs and vegetation maps, and again in February 2015, in order to make the samples homogeneous and appropriate for the assessment. Table 1 provides the details of the Hyperion parameters used for the research study, with spectral and spatial coverage information; the image, acquired on 25 Dec 2015, was collected from the USGS site.
The Hyperion imagery was acquired in the Geographic Tagged Image File Format (GeoTIFF) and the Hierarchical Data Format (HDF). Level 0 and level 1 products were available for data processing [4].
As per Sect. 3.1, the level 1 product supplies the DN as 16-bit signed integer radiance values in the range −32767 to +32767. The scale factors are 40 and 80 for the VNIR and SWIR bands, respectively, and each band of the VNIR (1–70) and SWIR (71–242) subsets is divided by its scale factor. The translation of DN values to radiance values is given by Eq. (1) [5], and the radiance is converted to reflectance as

\text{Reflectance} = \frac{\pi L_\lambda d^2}{\cos\theta_s \cdot ESUN_\lambda}   (2)
Here, L_\lambda is the spectral radiance at the sensor (W m⁻² sr⁻¹ μm⁻¹), d is the Earth-Sun distance, θ_s is the solar zenith angle, and ESUN_\lambda is the mean solar exo-atmospheric irradiance. The values of the Sun elevation angle and the band-center wavelengths are essential for QUAC [6]. This approach is based on three steps: the selection of pure spectral responses (endmembers) from the imagery, the estimation of the baseline, and the computation of reflectance using Eq. (3), where n signifies the number of endmembers.
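A minimal NumPy sketch of the DN-to-radiance scaling and the top-of-atmosphere reflectance conversion of Eq. (2) is shown below; the array contents, the ESUN value, and the solar geometry are illustrative assumptions, not values from this study.

```python
import numpy as np

def dn_to_radiance(dn, band_index):
    """Scale Hyperion DN to radiance: VNIR bands (1-70) use 40, SWIR (71-242) use 80."""
    scale = 40.0 if band_index <= 70 else 80.0
    return dn.astype(float) / scale

def radiance_to_reflectance(radiance, esun, sun_zenith_deg, d_au=1.0):
    """Top-of-atmosphere reflectance, Eq. (2): pi * L * d^2 / (cos(theta_s) * ESUN)."""
    theta = np.deg2rad(sun_zenith_deg)
    return np.pi * radiance * d_au**2 / (np.cos(theta) * esun)

# Example for a single (hypothetical) VNIR band:
dn_band = np.array([[1200, 1350], [980, 1100]], dtype=np.int16)
radiance = dn_to_radiance(dn_band, band_index=45)
reflectance = radiance_to_reflectance(radiance, esun=1800.0, sun_zenith_deg=35.0)
```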
Fig. 2 Atmospherically corrected objects using QUAC (settlement area, hilly rocks, crops, plants, and water, respectively)
The binary encoding classification algorithm encodes the endmember data as 0 s and 1 s. An exclusive-OR function compares the encoded reference spectral response curve with the encoded data spectral response curve [6]. Regions rather than individual pixels are considered, and the code for an image region is

BE = 2L + 28 \text{ bits},   (4)

where L is the total number of spectral channels of the hyperspectral imagery. The code comprises the spectrum, size, shape, and height: the slope and amplitude of the spectrum are symbolized by 2L bits, the size and shape of the fragment are indicated by 25 bits, and the relative height of a segment is represented by 3 bits.
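A toy Python sketch of the spectral part of this encoding (thresholding each band against the spectrum mean, adding slope bits, and matching by the number of XOR mismatches) is given below; it covers only the 2L spectral bits, not the 28 shape and height bits, and all names and spectra are our own.

```python
import numpy as np

def encode_spectrum(spectrum):
    """Binary-encode a spectrum: one bit per band for amplitude above the spectrum
    mean, plus one bit per band for a positive local slope (2L bits in total)."""
    spectrum = np.asarray(spectrum, dtype=float)
    amplitude_bits = (spectrum > spectrum.mean()).astype(np.uint8)
    slope_bits = (np.diff(spectrum, prepend=spectrum[0]) > 0).astype(np.uint8)
    return np.concatenate([amplitude_bits, slope_bits])

def classify(pixel_spectrum, endmembers):
    """Assign the class whose encoded endmember has the fewest XOR mismatches
    (smallest Hamming distance) with the encoded pixel spectrum."""
    code = encode_spectrum(pixel_spectrum)
    distances = {name: int(np.sum(code ^ encode_spectrum(ref)))
                 for name, ref in endmembers.items()}
    return min(distances, key=distances.get)

# Hypothetical 5-band endmember spectra:
endmembers = {"sorghum": [0.10, 0.15, 0.40, 0.45, 0.30],
              "water":   [0.06, 0.05, 0.04, 0.03, 0.02]}
print(classify([0.09, 0.16, 0.38, 0.44, 0.28], endmembers))   # -> 'sorghum'
```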
The EO-1 Hyperion image was orthorectified (level 1T), with both radiometric and geometric correction, and selected for a cloud cover of 0–9%. QUAC was applied to remove the bad column bands; the results are shown in Fig. 2 for crop, rock, settlement, plants, and water body. The preprocessed endmembers were considered for crop species discrimination together with some land cover classes. The field area was revisited in December 2015, and 304 ground control points (GCPs) were collected for mapping against the endmembers.
Figure 3 shows the classified image of the crops, including sorghum, wheat, and cotton, along with land cover objects such as water body, built-up area, forest, and dense forest, corresponding to the hyperspectral imagery.
Figure 4 illustrates the results attained by applying BE with ROIs created using the reference points. By analyzing the image, it was concluded that the sorghum, wheat, and cotton crops were classified with some mixed pixels, depending on the number of spectral channels. To overcome that shortcoming, SVM with varying polynomial degree was applied, and the third degree gives the best classification performance, as shown in Fig. 4. Endmembers created using the ROIs were used for accuracy assessment, with X and Xi as the numbers of spectral responses.
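The SVM step could be reproduced, for example, with a third-degree polynomial kernel as in the sketch below; the scikit-learn classifier, the train/test split, and the placeholder arrays are our illustrative assumptions rather than the exact workflow of the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# X: (n_samples, n_bands) reflectance spectra extracted at the GCP/ROI pixels.
# y: integer class labels (e.g., 0 = sorghum, 1 = wheat, 2 = cotton, ...).
rng = np.random.default_rng(0)
X = rng.random((304, 196))               # placeholder spectra, not the real data
y = rng.integers(0, 7, 304)              # seven classes, as in the study

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
svm = SVC(kernel="poly", degree=3, gamma="scale", C=1.0)
svm.fit(X_tr, y_tr)
print("overall accuracy:", svm.score(X_te, y_te))
```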
Table 2 presents the classification accuracy assessment using the BE technique. GCPs for the seven classes were created using 304 points, of which 223 were correctly classified, giving 73.35% accuracy. The highlighted portion shows the correlation of the classes based on the reference points and the endmember spectra. The sorghum crop gives the highest number of correlated points compared to the wheat and cotton crop species. The overall accuracy and kappa coefficient obtained using BE are 73.35% and 0.68, respectively.
This paper deals with the investigation of remotely sensed data recorded by the EO-1 Hyperion satellite, which provides 242 bands spanning 400–2500 nm. The Aurangabad region was targeted because of the availability of data and the need to assess the agricultural land. QUAC was applied to remove bad bands before classification. The binary encoding technique yielded an OA of 73.35% and a KC of 0.68, while the highest accuracy was found for the support vector machine with a third-degree polynomial kernel function, with an OA of 90.44% and a KC of 0.84.
Table 2 Overall accuracy assessment using binary encoding technique
Binary encoding Sorghum Wheat Cotton Water Built_Up Dense_forest Forest C O PA UA
(C commission error, O omission error, PA producer's accuracy, UA user's accuracy; values in %)
Sorghum 88.10 13.3 26.6 0.0 1.96 2.00 0.00 33.33 57.14 42.86 66.67
Wheat 2.38 76.6 20.0 0.0 0.00 0.00 0.00 29.03 26.67 73.33 70.97
Cotton 7.14 10.0 53.3 0.0 0.00 0.00 0.00 63.16 53.33 46.67 36.84
Waterbody 0.00 0.00 0.00 98.04 0.00 0.00 0.00 5.66 1.96 98.04 94.34
Built_Up 0.00 0.00 0.00 0.0 98.04 0.00 0.00 31.67 19.61 80.39 68.33
Dense_Forest 0.00 0.00 0.00 0.0 0.00 98.00 0.00 0.00 6.00 94.00 100
Forest 2.38 0.00 0.00 0.0 0.00 1.96 100 35.42 38.00 62.00 64.58
OA 73.35%
Crop type classification was successfully obtained using pure spectra collected through GCPs. Future work will involve the use of advanced high-resolution data for crop type monitoring. Another direction for the current research will be to solve the mixed pixel problem using pure spectra of all crop species, which will be created using a spectroradiometer to provide more accurate results.
Acknowledgements The authors would like to acknowledge the partial technical support provided under DST-FIST, UGC SAP-(II), DRS (Phase-II), and NISA to the Dept. of CS & IT, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, MS, India, and the financial support under the UGC-BSR research fellowship for this research work. The authors would also like to thank Miss Sonali L. Ingle and Mr. Amol Vibhute for their support in GCP collection.
References
1. Singh D, Singh R (2015) Evaluation for EO 1 hyperion data of crop studies in part of Indo-
Gangetic plains-a case study of Meerut district
2. Leverington DW (2008) Mapping surface cover using EO-1 hyperion data: ongoing studies in
Arid environments
3. https://wikipedia.org/wiki/Aurangabad_district,_Maharashtra
4. Vibhute AD, Kale KV, Dhumal RK, Mehrotra SC (2015) Hyperspectral imaging data atmo-
spheric correction challenges and solutions using QUAC and FLAASH algorithms. In: Interna-
tional conference on man and machine interfacing
5. Pervez W, Khan SA, Valiuddin M (2015) Hyperspectral hyperion imagery analysis and its appli-
cation using spectral analysis. In: The international archives of photogrammetry, remote sensing
and spatial information sciences, Germany, vol W2
6. Mazer AS, Lee M et al (1988) Image processing software for imaging spectrometry analysis.
Remote Sens Environ 24(1):201–210
7. Surase RR, Kale KV (2015) Performance evaluation of support vector machine and maximum
likelihood classifier for multiple crop classification. Int J Remote Sens Geosci (IJRSG) 4(1)
8. Surase RR, Kale KV (2015) Multiple crop classification using various support vector machine
kernel functions. IJERA 5(1)
9. Varpe AB, Rajendra YD, Vibhute AD, Gaikwad SV, Kale KV (2015) Identification of plant
species using non-imaging hyperspectral data (MAMI). IEEE, pp 1–2
Drought Severity Identification
and Classification of the Land Pattern
Using Landsat 8 Data Based on Spectral
Indices and Maximum Likelihood
Algorithm
Abstract The manual survey of drought severity is a very hectic and time-consuming task. This paper reports a study to assess the adeptness of satellite-based drought indices for observing the spatiotemporal extent of agricultural drought events. The Land Use Land Cover (LULC) has been categorized into six classes, namely Vegetation, Settlement, Barren land, Harvested land, Hill with rocks, and Water bodies, and computed using the Maximum Likelihood (ML) supervised algorithm. Moreover, an attempt has been made to analyze the drought condition using multi-date Landsat 8 images of Vaijapur taluka, which falls in a drought-prone zone. The severity of drought was determined and defined based on the Normalized Difference Vegetation Index (NDVI) with a good outcome. The drought severity was classified into three groups, viz., severe, moderate, and normal. The present study shows that the entire study area was affected by the worst drought condition during the period of 2013 and 2014. The experimental results show that the overall accuracy of the ML classifier was 81.31% with a kappa coefficient of 0.81 for the year 2013, and 78.02% with a kappa value of 0.73 for the year 2014. The present study is essential for the assessment of drought conditions with advanced technology before the drought gets severe.
1 Introduction
Drought is a complex and damaging natural disaster known to the world. It has a significant impact on various sectors like the economy, ecology, social life, industry, and agriculture [1–7]. Drought is classified into four categories, viz., meteorological, hydrological, agricultural, and socioeconomic drought. Meteorological drought occurs due to a lack of precipitation over a region for a long period of time [8]. The lower precipitation leads to hydrological and agricultural drought. Hydrological drought has an impact on water resources, such as a declining flow of water in rivers, lakes, reservoirs, and streams [9, 10]. Agricultural drought is related to reduced soil moisture, which is not enough to sustain the health of the crop. The economy of various countries depends upon agribusiness. Socioeconomic drought is defined on the basis of the gap between the demand and supply of economic goods, which is responsible for societal imbalance [6, 11].
Traditionally, drought monitoring has been carried out using rainfall data collected by local weather stations. The rainfall measurement was considered for a cluster of villages, where a cluster can include 10 or more villages. The Government of India has installed weather stations at the tehsil and district levels, but they have a limitation of spatial coverage [11]. Satellite remote sensing has worldwide coverage and provides data frequently. Drought monitoring and forecasting models utilize satellite data along with ground observation details to forecast drought warnings and for risk assessment. Satellite image-based drought indicators like NDVI, VCI (Vegetation Condition Index), SAVI (Soil Adjusted Vegetation Index), and TCI (Temperature Condition Index) have been used to assess the health of the vegetation, the soil moisture condition, and the temperature profile of the geographic location [12, 13].
In the present research study, we have chosen Vaijapur tehsil of Aurangabad district, Maharashtra, India. Vaijapur is known as the gateway to the Marathwada region. The average precipitation is reported to be 500.20 mm, and the study area has an average temperature of 34–42 °C. The total area of the taluka is 1,54,378 ha, out of which 1,21,830 ha falls in the agricultural sector. Onion, sugarcane, jawar, bajra, and corn are the main crops, whereas cotton is a major cash crop. The economy of the study area depends upon agriculture and associated businesses.
The Landsat 8 satellite images of the months of June, July, August, and September of the Kharif season of the years 2013–2014 have been used for the experimental analysis. The Landsat 8 dataset was obtained from the USGS in November 2014. The images are orthorectified and geometrically corrected by the USGS. The Landsat scenes are processed to the standard level-1 precision terrain corrected (L1T) product. It is packaged in the Geographically tagged image file format (GeoTIFF). The package includes 13 files: 11 band images, a Quality Assurance file, and a metadata file. The Quality Assurance (QA) file includes the cloud, terrain shadow, and data artifact information. Bands 1–9 are designated to the OLI sensor and bands 10–11 to the TIRS sensor. The spatial resolution of the image is 15 m for panchromatic, 30 m for multispectral, and 100 m for thermal data, which is registered to the OLI sensor data in order to create the level-1T product. The Geomatica software with the Atmospheric CORrection (ATCOR) plug-in has been used for preprocessing of the Landsat 8 image data. The Erdas Imagine 2014 software is used for data processing, spatial feature extraction, vegetation index computation, and classification. The ESRI ArcGIS 10.2.3 software was used to generate a spatial map of the study area.
3 Methodology
The vegetation absorbs the photosynthetically active radiation (PAR) spectral region, i.e., the visible region of the electromagnetic spectrum. The absorbed energy is used for the process of photosynthesis. The pigments of the green leaf, such as chlorophyll, absorb the solar radiation of wavelength 0.4–0.7 µm and re-emit it in the NIR region from 0.7 to 1.1 µm, which is nearly half of the total absorbed solar energy [13, 14]. Damaged or dry leaves cannot absorb the solar radiation due to the absence or lack of chlorophyll, so they cannot re-emit the solar energy. A high intensity of re-emitted solar energy indicates a healthy condition of the leaf. The NDVI (Eq. 1) is a very popular index worldwide for vegetation health analysis. The NDVI maps the image into values from −1 to +1, where negative values indicate non-vegetation and positive values indicate vegetation [14].
NDVI = (NIR − RED) / (NIR + RED) (1)
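A small sketch of this computation, with illustrative reflectance patches standing in for the actual Landsat 8 bands, is given below.

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """NDVI = (NIR - RED) / (NIR + RED), computed per pixel; eps avoids division by zero."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Illustrative 3x3 reflectance patches (for Landsat 8 OLI, NIR is band 5 and red is band 4).
nir_band = np.array([[0.45, 0.50, 0.30], [0.55, 0.20, 0.35], [0.60, 0.48, 0.25]])
red_band = np.array([[0.10, 0.12, 0.25], [0.08, 0.18, 0.30], [0.07, 0.11, 0.22]])
print(np.round(ndvi(nir_band, red_band), 2))
```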
The LULC (land use land cover) analysis was done with the Maximum Likelihood (ML) supervised classification approach. The ML classifier is based on Bayes' theorem, which is used for the decision-making process. The classifier assumes that the cells of each cluster sample are normally distributed in the multidimensional space. The classifier considers the variance and covariance of the class signatures when assigning each cell to the likelihood classes. After calculating the probability for each class, a pixel is assigned to the most probable class, or tagged as an "unknown" if the probability values are all below a threshold [15]. The analysis of the maximum likelihood classifier can be implemented using Eq. (2) for the satellite imagery:

p(X|Cj) = 1 / ((2π)^(n/2) |Σj|^(1/2)) × exp(−(1/2)(X − μj)^T Σj^(−1) (X − μj)), (2)

where p(X|Cj) is the conditional probability (probability density) of observing X from class Cj, X = (DN1, DN2, DN3, …, DNn)^T is the pixel vector with n bands, μj = (μj1, μj2, μj3, …, μjn)^T is the mean vector of class Cj, and Σj is the covariance matrix of class Cj.
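A minimal sketch of this decision rule, assuming per-class means and covariances have already been estimated from training pixels (the two-band values below are made up), is given here.

```python
import numpy as np

def gaussian_likelihood(x, mean, cov):
    """p(x | C_j) for an n-band pixel x under the class Gaussian of Eq. (2)."""
    n = x.size
    diff = x - mean
    inv = np.linalg.inv(cov)
    norm = 1.0 / ((2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * diff @ inv @ diff)

def ml_classify(pixel, class_stats, threshold=1e-12):
    """Assign the pixel to the most probable class, or 'unknown' below the threshold."""
    probs = {name: gaussian_likelihood(pixel, m, c) for name, (m, c) in class_stats.items()}
    best = max(probs, key=probs.get)
    return best if probs[best] >= threshold else "unknown"

# Illustrative two-band example with two classes (means and covariances are made up).
stats = {
    "vegetation": (np.array([0.4, 0.1]), np.array([[0.01, 0.0], [0.0, 0.01]])),
    "water":      (np.array([0.05, 0.03]), np.array([[0.005, 0.0], [0.0, 0.005]])),
}
print(ml_classify(np.array([0.38, 0.12]), stats))
```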
A total of eight images of the Kharif seasons of 2013 and 2014 were preprocessed using the ATCOR atmospheric correction module, and then a subset was generated using a Region Of Interest (ROI) defined by the administrative boundary. The False Color Composite (FCC) image was generated using bands Red-5, Green-4, and Blue-3 for visual analysis of spatial features. The September images of 2013 and 2014 were used for land use and land cover classification using the ML classifier. The ML classifier was trained with pure pixels of the respective classes, namely Vegetation, Settlement, Barren land, Harvested land, Hill with rocks, and Water bodies. Figure 1 shows the classified land use and land cover map, which clearly depicts that the green vegetation in 2013 is nearly half of that in 2014. The green vegetation area is lower in 2013 due to the unavailability of soil moisture. The result indicates that harvested land increased in the year 2013 because most of the crops were damaged or water stressed due to the absence of surface water and groundwater. Figures 2 and 3 illustrate the vegetation health in the years 2013 and 2014, respectively. The vegetation health is categorized into three groups, namely severe, normal, and healthy, on the basis of NDVI values.
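A small sketch of this NDVI-based grouping is given below; the cut-off values are illustrative placeholders, since the exact thresholds are not stated in the text.

```python
import numpy as np

def severity_class(ndvi_map, severe_max=0.2, normal_max=0.4):
    """Label each pixel severe / normal / healthy from its NDVI value.
    The cut-off values here are illustrative placeholders, not the paper's."""
    labels = np.full(ndvi_map.shape, "healthy", dtype=object)
    labels[ndvi_map <= normal_max] = "normal"
    labels[ndvi_map <= severe_max] = "severe"
    return labels

ndvi_map = np.array([[0.05, 0.25, 0.55], [0.18, 0.42, 0.31]])
print(severity_class(ndvi_map))
```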
In the month of June, the rainfall was much below the average monthly rainfall. The entire tehsil was suffering from drought conditions in June and July. Due to the further lack of precipitation in July, the available crop was badly affected by the lack of water.
Fig. 1 Land use land cover (LULC) of the year 2013 and 2014 of the study area
Fig. 2 Land use land cover (LULC) image of Vaijapur of the year 2013 and 2014
The entire Kharif season of 2013 received less rainfall than the year 2014. The health of the vegetation degraded from June to
September. In August 2013, a total of 7639.47 ha was affected by a severe drought condition. In the Kharif season of 2014, the months of June and July did not receive sufficient rainfall for sowing activity. An area of 9321.21 ha was affected by severe drought conditions in June. The Kharif season of 2014 faced highly severe conditions due to scanty rainfall. The severe condition decreased from June to September.
A total of 91 ground truth points were collected in the study area. The accuracy of the classified data and the results of this study were assessed with error matrices, producer's and user's accuracies, and Kappa statistics, as shown in Table 1.
Table 1 Error matrix resulting from classifying training sets pixels (ML classifier)
Classes Ground truth (Pixels)
Vegetation Harvested Settlement Barren Hill with Water Total
land land rocks body
2013 2014 2013 2014 2013 2014 2013 2014 2013 2014 2013 2014 2013 2014
Vegetation 22 21 2 3 0 0 2 2 0 0 0 0 26 26
Harvested land 1 2 9 8 0 0 1 1 0 0 0 0 11 11
Settlement 0 0 0 0 23 24 4 2 3 4 0 0 30 30
Barren land 1 1 2 2 0 0 5 4 0 1 0 0 8 8
Hill with rocks 0 0 0 1 1 1 0 0 8 7 0 0 9 9
Water body 0 0 0 0 0 0 0 0 0 0 7 7 7 7
Total 24 24 13 14 24 25 12 9 11 12 7 7 91 91
PA (%) 91.67 87.05 69.23 57.14 95.83 96 41.67 44.44 72.73 58.33 100 100
UA (%) 84.62 80.77 81.82 72.73 76.67 80 62.50 50 88.89 77.78 100 100
Overall accuracy: 2013 = 81.31%, 2014 = 78.02%; Kappa value: 2013 = 0.811, 2014 = 0.73
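Accuracy figures of this kind can be derived from a confusion matrix as sketched below; the matrix used here is a small illustrative example rather than the values of Table 1.

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall, producer's and user's accuracies plus kappa from a confusion
    matrix whose rows are classified labels and columns are ground truth."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    diag = np.diag(cm)
    overall = diag.sum() / total
    producers = diag / cm.sum(axis=0)          # per ground-truth column
    users = diag / cm.sum(axis=1)              # per classified row
    expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total ** 2
    kappa = (overall - expected) / (1 - expected)
    return overall, producers, users, kappa

# Illustrative 3-class matrix (not the values of Table 1).
cm = [[50, 3, 2], [4, 45, 6], [1, 2, 40]]
oa, pa, ua, k = accuracy_metrics(cm)
print(f"OA={oa:.4f}, kappa={k:.4f}")
print("PA:", np.round(pa, 3), "UA:", np.round(ua, 3))
```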
5 Conclusions
The agriculture sector is highly dependent on rainfall. The late arrival of the monsoon rainfall in June was responsible for the smaller area available for crops in Vaijapur tehsil. If there is not enough soil moisture present in the soil for sowing activity, then the farmer has to wait for the return of sufficient rainfall for the sowing operation. The Landsat 8 images were used for the analysis of agricultural drought severity based on drought indices. The LULC mapping was performed with the supervised maximum likelihood algorithm on Landsat 8 satellite data. It is observed that the maximum likelihood classifier successfully classified Vegetation, Harvested land, Hill with rocks, Settlement, Barren land, and Water bodies. The maximum likelihood method estimates the optimum parameters using a unified approach and works well for well-defined distributions. The NDVI indicated that the area was affected by a higher severity of drought in the year 2013 as compared to the year 2014. The research study thus identified a high drought severity condition in the study area.
Acknowledgements The authors would like to thank the University Grants Commission (UGC), India, for granting UGC SAP (II) DRS Phase-I & Phase-II F. No. 3-42/2009 & 4-15/2015/DRS-II for the laboratory facility to the Dept of CSIT, Dr. B.A.M. University, Aurangabad, Maharashtra, India, and for the financial support under the UGC BSR Fellowship for this research study.
References
1. Wilhite D (Ed.) (2000) Drought: a global assessment. In: Routledge hazards and disasters
series, vols I & II, Routledge, London
2. Carroll N, Frijters P, Shields MA (2009) Quantifying the costs of drought: new evidence from
life satisfaction data. J Populat Econ 22(2):445–461. https://doi.org/10.1007/s00148-007-0174-3
3. Van Vliet MTH, Yearsley JR, Ludwig F, Vogele S, Lettenmaier DP, Kabat P (2012) Vulnerability
of US and European electricity supply to climate change. Nature Clim Change 2(9):676–681.
https://doi.org/10.1038/nclimate1546
4. Garcia-Herrera R, Daz J, Trigo RM, Luterbacher J, Fischer EM (2010) A review of the european
summer heat wave of 2003. Crit Rev Environ Sci Technol 40(4):267–306
5. Lewis SL, Brando PM, Phillips OL, van der Heijden GMF, Nepstad D (2011) The 2010 Amazon
drought. Science 331(6017):554. https://doi.org/10.1126/science.1200807
6. Gaikwad SV, Kale KV, Kulkarni SB, Varpe AB, Pathare GN (2003) Agricultural drought
severity assessment using remotely sensed data: a review. Int J Advanc Remote Sens GIS
4(1):1195–1203, Article ID Tech-440 ISSN 2320-0243
7. Van Loon AF, Laaha G (2015) Hydrological drought severity explained by climate and catch-
ment characteristics. J Hydrol 526:3–14
8. Mishra AK, Singh VP (2010) A review of drought concepts. J Hydrol 391:202–216. https://doi.org/10.1016/j.jhydrol.2010.07.012
9. Panu US, Sharma TC (2009) Analysis of annual hydrological droughts: the case of northwest
Ontario, Canada. Hydrol Sci J 54(1):29–42. https://doi.org/10.1623/hysj.54.1.29
10. Sharma TC, Panu US (2014) Modeling of hydrological drought durations and magnitudes:
experiences on Canadian stream flows. J Hydrol Region Stud 1:92–106
11. Singh RP, Roy S, Kogan F (2003) Vegetation and temperature condition
indices from NOAA AVHRR data for drought monitoring over India. Int J Remote Sens
24(22):4393–4402. https://doi.org/10.1080/0143116031000084323
12. Gaikwad SV, Kale KV, Dhumal RK, Vibhute AD (2015) Analysis of TCI index using Landsat8
TIRS sensor data of Vaijapur region. Int J Comput Sci Eng 03(08):59–63
13. Gaikwad SV, Kale KV (2015) Agricultural drought assessment of post monsoon season of
Vaijapur Taluka using Landsat8. Int J Res Eng Technol 04(04):405–412
14. Monica C, Schott John R, John M, Nina R (2014) Development of an operational calibration
methodology for the landsat thermal data archive and initial testing of the atmospheric com-
pensation component of a land surface temperature (LST) product from the archive. Remote
Sens 6:11244–11266. https://doi.org/10.3390/rs61111244
15. Vibhute AD, Dhumal RK, Nagne AD, Rajendra YD, Kale KV, Mehrotra SC (2016) Analysis,
Classification and estimation of pattern for land of Aurangabad region using high-resolution
satellite image. In: Proceedings of the second international conference on computer and com-
munication technologies. Springer India, pp 413–427
Spectral Feature Extraction
and Classification of Soil Types Using
EO-1 Hyperion and Field
Spectroradiometer Data Based on PCA
and SVM
1 Introduction
Under this constraint, an effort has been made to extract soil features and classify the surface soil types using level-1T Hyperion data of an agricultural site in the Kanhori region, Phulambri Taluka of Aurangabad district.
The acquired non-imaging raw spectra of soils were imported into the ViewSpec Pro (6.0.11) software for averaging the similar spectra; the processed spectra were later imported into the Environment for Visualizing Images (ENVI) software as reference spectra (with reference pixels) for training the HRS imaging data, which was used for classification and generation of a thematic map. The overall methodology was implemented in the ENVI 5.1 software, and the ArcGIS 10 software was used for data visualization.
First, the Hyperion image was converted into the ENVI standard format using the 'Hyperion tools toolkit'. Uncalibrated and water vapor bands were identified and removed from the Hyperion image for further processing [8]. The stable subset of the remaining 155 key bands with their spectral wavelengths is listed in Table 1.
The 155 key bands were used for the digital number to radiance conversion using two scaling factors, 40 and 80, for the VNIR and SWIR regions, respectively. After conversion to radiance, the image was converted into visible reflectance. The QUAC atmospheric correction algorithm was applied to it to account for atmospheric effects [8]. Additionally, the QUAC algorithm does not require any ancillary metadata [10].
After atmospheric correction of the Hyperion image, the PCA algorithm was implemented to reduce the dimensionality of the huge Hyperion data. The PCA algorithm, also known as the Karhunen–Loeve transform [11], works on the highly correlated adjacent bands of the hyperspectral image while retaining the maximum original information of the pattern [5]. The features are obtained from the original dataset on the basis of the eigenvectors corresponding to the eigenvalues of the covariance matrix [12]. After applying the PCA algorithm to the 155 bands, 155 new PCs were generated. The dimensionality of the data was examined using the eigenvalues, and it was seen that the first three PCs have higher eigenvalues than the rest of the bands. The first three PCs contain more than 98% of the information of the 155 bands. The transformed image was used for the classification.
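A compact sketch of this PCA step on a hyperspectral cube, using random stand-in data in place of the 155-band Hyperion subset, is given below.

```python
import numpy as np

def pca_reduce(cube, n_components=3):
    """Project a (rows, cols, bands) hyperspectral cube onto its leading
    principal components and report the explained-variance share."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    X -= X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # returned in ascending order
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    pcs = (X @ eigvecs[:, :n_components]).reshape(rows, cols, n_components)
    explained = eigvals[:n_components].sum() / eigvals.sum()
    return pcs, explained

cube = np.random.default_rng(2).random((20, 20, 155))   # stand-in for the 155-band subset
pcs, share = pca_reduce(cube)
print(pcs.shape, f"variance captured by the first 3 PCs: {share:.2%}")
```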
The SVM non-parametric supervised machine learning algorithm was chosen due to its high accuracy for the classification of high-dimensional, heterogeneous, and noisy hyperspectral data with few training pixels [13]. The SVM method was originally formulated by Vapnik [14] in 1995 based on statistical learning theory. Only support vector pixels are considered, and the other training pixels are ignored in SVM-based classification. Therefore, high accuracy can be gained with fewer training samples [7]. The RBF kernel of the SVM method was used in the present study. An accuracy assessment was performed using an error matrix with ground truth points. The classified label against the actual ground observation at the specified location can be determined by the error matrix. Nonzero off-diagonal values show the error between the classified features and the corresponding observations [15, 16].
The false color composite was generated from the Hyperion data (Red: band 51, Green: band 30, Blue: band 20) for better visual illustration. The resampling method was nearest neighbor, and the datum was World Geodetic System (WGS)-84, rectified to Universal Transverse Mercator (UTM) zone 43 North.
The Hyperion image was then processed through PCA for dimensionality reduction, and the first three PCs were analyzed and described accordingly. According to the laboratory analysis of soil physicochemical attributes, the reflectance spectra of soil samples obtained by the ASD spectroradiometer, and the PCA results, one major (black cotton soil or "Regur") and two minor (lateritic soil and sand dunes) soil types were identified and classified along with two other land features. The black cotton soil includes vertisol, inceptisol, and entisol; the lateritic soil includes alfisol; and the sand dunes include arenosols or typic torripsamments. The image spectra were compared with the spectra obtained from the field for each soil surface feature. The spectral features of soils were extracted on the basis of the spectral signature characteristics of the non-imaging reflectance spectra and ground reference data and were compared with the HRS imaging data. The calculated physicochemical attributes of the soils (geolocated while collecting the samples), the spectral reflectance characteristics of the geolocated non-imaging data, and the ground reference points, together with visual inspection of the soil sampling sites, were used in training on the Hyperion data. The ROIs were developed from these reference points, which acted as the support vectors in the SVM-based classification to train and test the hyperspectral imagery.
Five soil classes according to the USDA soil taxonomy [17] were detected and classified on the basis of the report generated by the laboratory analysis of soil physicochemical properties, the soil spectral reflectance characteristics from the non-imaging reports, and the imaging reflectance spectra. The classes were vertisol, inceptisol, and entisol of black cotton soil; alfisol of lateritic soil; arenosols or typic torripsamments of sand dunes; vegetation; and settlements. The spectra of the soil surface features were distinguished according to their spectral reflectance properties within the specified spectrum range. Soil physicochemical attributes like soil water (moisture) content, soil organic matter, soil Fe content, and soil clay content are the main factors that decrease the spectral reflectance of soils [2]. The absorption peaks of water (moisture and hydroxyl ions) were found at 1400–1450 nm and 1900–1950 nm [18].
The soil classification was performed using SVM as explained in Sect. 2.2.2. The Gaussian RBF kernel of the SVM method was implemented for the classification analysis. In the SVM classification, the gamma (γ) value, the penalty parameter C, and the classification probability threshold were 0.010, 100, and 0.10, respectively. The classification accuracy was assessed with the producer's accuracy, user's accuracy, overall accuracy, and kappa value. The produced soil classification map is shown in Fig. 1.
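A minimal sketch of such an RBF-kernel SVM classification with the stated parameter values, assuming scikit-learn and using illustrative stand-in training data, is shown below; how the 0.10 probability threshold is applied to the predicted class probabilities is an assumption on our part.

```python
import numpy as np
from sklearn.svm import SVC

# Training pixels drawn from the ROIs (illustrative arrays): 3-PC features, 7 classes.
rng = np.random.default_rng(3)
X_train = rng.random((350, 3))
y_train = rng.integers(0, 7, size=350)

# RBF-kernel SVM with the parameter values reported in the text (gamma = 0.010, C = 100).
clf = SVC(kernel="rbf", gamma=0.010, C=100, probability=True).fit(X_train, y_train)

pixels = rng.random((5, 3))
probs = clf.predict_proba(pixels)
best = probs.argmax(axis=1)
# Keep a label only when its probability clears the 0.10 threshold; -1 marks unclassified.
labels = np.where(probs.max(axis=1) >= 0.10, clf.classes_[best], -1)
print(labels)
```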
The classification map clearly indicates that black cotton soils cover most of the area of the test site. As per the laboratory reports of the soils, these black soils are deep or heavy and medium or lighter according to their physical properties. The textures of the black soils are loamy to clayey with mixed carbonates (mostly CaCO3), and they are suitable for cotton cultivation. The organic matter and nitrogen were found to be low in this soil, and the pH values are about 7–9. According to the USDA soil taxonomy, the black soils are vertisol, inceptisol, and entisol. The lateritic soil included only the alfisol in the studied areas as per the USDA soil taxonomy, which was found in the hilly part of the test site. The pH values of these soils are low, and the organic matter is high with a fine texture. Sand dunes were observed mostly near the riverside and hilly rocks due to the spectral structure of sand dunes and rocks. The texture of the sand dunes was sandy. The Electrical Conductivity (EC) values and organic matter contents are very low in sand dunes. Natural vegetation, including agricultural crops, was accurately classified.
The achieved classification accuracy of the soil features was estimated by an error matrix using the reference spectra of the soils with ground control points. The overall classification accuracy achieved was 92.76% with a kappa coefficient of 0.90. The error matrix with the overall accuracy, kappa value, user's accuracy, producer's accuracy, and correctly classified pixels of each class is shown in Table 2. The diagonal values of the error matrix indicate the pixels accurately classified into their classes.
In fact, all the classes were classified appropriately, as our aim was to identify and classify the soil types, which were classified well. Regarding the class-specific accuracies, black cotton soil, lateritic soil, and vegetation were classified with higher accuracy, excluding sand dunes, which were misclassified as settlements due to the similarity of their spectral signature characteristics.
4 Conclusions
Acknowledgements The authors would like to acknowledge the UGC for providing the BSR Fellowship and lab facilities under UGC SAP (II) DRS Phase-I F.No. 3-42/2009, Phase-II 4-15/2015/DRS-II, DeitY, Government of India, under the Visvesvaraya Ph.D. Scheme, DST-MRP-R No. BDID/01/23/2014-HSRS/35(ALG-IV), and also extend our gratitude to the DST-FIST program to the Dept. of CS & IT, Dr. BAM University, Aurangabad, M.S., India. We are also thankful to Prof. D. T. Bornare and his team for the physicochemical analysis of soil specimens at the "MIT Soil and Water Testing Laboratory, Aurangabad", Maharashtra, India.
References
1. Vibhute AD, Gawali BW (2013) Analysis and modeling of agricultural land use using remote
sensing and geographic information system: a review. Int J Eng Res Appl (IJERA) 3(3):081–091
2. Ben-Dor E, Patkin K, Banin A, Karnieli A (2002) Mapping of several soil properties using
DAIS-7915 hyperspectral scanner data: a case study over soils in Israel. Int J Remote Sens
23(6):1043–1062
3. Ben-Dor E, Banin A (1994) Visible and near infrared (0.4–1.1 µm) analysis of arid and semiarid
soils. Remote Sens Environ 48:261–274
4. Vibhute AD, Kale KV, Dhumal RK, Mehrotra SC (2015) Soil type classification and mapping
using hyperspectral remote sensing data. In: International conference on man and machine
interfacing (MAMI). IEEE, pp 1–4
5. Rodarmel C, Shan J (2002) Principal component analysis for hyperspectral image classification.
Survey Land Informat Syst 62(2):115–123
6. Hughes GF (1968) On the mean accuracy of statistical pattern recognizers. IEEE Trans Inf
Theor 14(1):55–63
7. Richards JA, Jia X (2006) Remote sensing digital image analysis an introduction, 4th edn.
Springer-Verlag, Berlin Heidelberg
8. Vibhute AD, Kale KV, Dhumal RK, Mehrotra SC Hyperspectral imaging data atmospheric cor-
rection challenges and solutions using QUAC and FLAASH algorithms. In: IEEE, international
conference on man and machine interfacing (MAMI), pp 1–6
9. Hatchell DC (1999) Analytical spectral devices, Inc. (ASD) Technical Guide, 3rd ed.
10. Bernstein LS, Jin X, Gregor B, Golden SMA (2012) Quick atmospheric correction code:
algorithm description and recent upgrades. SPIE Opt Eng 51(11). https://doi.org/10.1117/1.OE.51.11.111719
11. Duda RO, Hart PE, Stork DG (2000) Pattern classification. 2nd ed., Wiley-Interscience
12. Gao J (2009) Digital analysis of remotely sensed imagery. The McGraw-Hill Companies Inc,
New York
13. Heras DB, Arguello F, Barriuso PQ (2014) Exploring ELM-based spatial–spectral classification
of hyperspectral images. Int J Remote Sens, Taylor & Francis, 35(2):401–423
14. Vapnik VN (1999) An overview of statistical learning theory. IEEE Trans Neural Netw
10(5):988–999
15. Vibhute AD, Nagne AD, Gawali BW, Mehrotra SC (2013) Comparative analysis of different
supervised classification techniques for spatial land use/land cover pattern mapping using RS
and GIS. Int J Sci Eng Res 4(7):1938–1946
16. Vibhute AD, Dhumal RK, Nagne AD, Rajendra YD, Kale KV, Mehrotra SC (2016) Analysis,
classification, and estimation of pattern for land of Aurangabad region using high-resolution
satellite image. In: Proceedings of the second international conference on computer and com-
munication technologies. Springer India, pp 413–427
17. USDA (2014). Keys to soil taxonomy, United States department of agriculture. Natural
resources conservation service
18. Bilgili AV, Van Es HM, Akbas F, Durak A, Hively WD (2010) Visible-near infrared reflectance
spectroscopy for assessment of soil properties in a semi-arid area of Turkey. J Arid Environ
74(2):229–238
Performance Analysis of MIMO
MC-CDMA System Using Optimization
Algorithms
1 Introduction
In a mobile communication system, when the signal propagates through free space, its strength varies due to the presence of obstacles. To detect the original transmitted bits, the receiver requires knowledge of the channel state information (CSI). Multiple input and multiple output (MIMO) is a key technology for beyond-3G mobile communications. MIMO helps in increasing the data rate and reliability. Space–Time Block Coding (STBC) is one of the functions of MIMO.
Space–time block codes provide spatial diversity, and the diversity order depends on the number of antennas at the input and output sides. The transmit diversity scheme is an attractive technology for mobile wireless communication systems and provides better performance when there are more antennas at the transmitter side and a single receiving antenna at the receiver side [1]. CDMA followed by Orthogonal Frequency Division Multiplexing (OFDM) forms MC-CDMA, which is an enabling technology for modern mobile communication systems [2, 3]. In this paper, the MC-CDMA system is designed for four transmitting antennas and different numbers of receiving antennas.
P. Sreesudha (B)
G. Narayanamma Institute of Technology & Science, Hyderabad, India
e-mail: sree.sudha53@gmail.com
B. L. Malleswari
Sridevi Women’s Engineering College, Hyderabad, India
e-mail: blmalleswari@gmail.com
Optimization algorithms play an important role in solving complex problems. Among them, nature-inspired algorithms are very popular since they give near-optimum solutions for complex problems. There are many such algorithms. One such algorithm is the Genetic Algorithm (GA), which was inspired by Darwin's principle. The STBC MC-CDMA system channel is estimated with the help of a genetic algorithm in [4]. However, GA may fail to find an optimal solution and takes a long time to converge. Another popular algorithm in the evolutionary category is Differential Evolution [5]; evolutionary algorithms suffer from slow convergence. One more algorithm is Simulated Annealing (SA), which is popular for complex problems [6], but its disadvantage is the large computing time needed to reach the desired solution. Other variations have also been proposed to improve the convergence rate of SA [7]. Channel estimation based on a Cuckoo search for the MC-CDMA system is implemented in [8].
Swarm Intelligence (SI)-based algorithms are widely used in many applications. SI algorithms mimic the behavior of insects, birds, fishes, etc. Particle Swarm Optimization (PSO) is a very popular algorithm in this category. It was proposed by James Kennedy and Russell Eberhart in 1995 [9, 10] and uses the swarming behavior of fish, birds, etc. The individuals efficiently share information and find the global optimum in fewer iterations, and the implementation of PSO is simple. The Kinetic Gas Molecule Optimization (KGMO) algorithm is another kind of optimization algorithm. It works on the behavior of gas molecules and is inspired by the laws of thermodynamics and heat transfer [11].
In Sect. 2, the MC-CDMA system model is discussed. In Sect. 3, the proposed channel estimation methods are discussed. Section 4 presents the simulation results, and finally, the paper is concluded in Sect. 5.
2 System Model
Direct Sequence CDMA (DS-CDMA) was popularly used for 3G mobile systems. In order to increase the data rate and capacity, the DS-CDMA system is combined with Orthogonal Frequency Division Multiplexing (OFDM), which forms a multi-carrier CDMA system. In an MC-CDMA system, the transmitted data is spread first and then transmitted through multiple carriers with the help of an IFFT block. To optimize the error rate, we propose efficient channel optimization algorithms for the MC-CDMA system with the STBC coding scheme.
Figure 1 shows the STBC MC-CDMA block diagram. The input data x[n] is spread using a CDMA encoder. The spread sequence is modulated and given to the IFFT block. Then, a cyclic prefix (CP) is added. Then, the data is encoded with space–time block coding and passed through the channel. At the receiver section, the channel coefficients are estimated by optimization algorithms and then decoded to retrieve the originally transmitted data.
3 Proposed Methodology
In the proposed work, the MIMO MC-CDMA system channel is estimated with the KGMO and PSO algorithms. A MIMO MC-CDMA system for two and four transmitting antennas and different numbers of receiving antennas with KGMO channel optimization is implemented in [12]. In this paper, the KGMO-based system is compared to the PSO optimization-based system.
A. KGMO Based Channel Estimation
In the KGMO algorithm, gas molecules are treated as agents. Each agent has several parameters: mass, velocity, position, and kinetic energy. Updating the position of the agents is done by updating the position and velocity of all agents [11].
v_i^d(t + 1) = T_i^d(t) w v_i^d(t) + C1 rand_i(t)(gbest^d − x_i^d(t)) + C2 rand_i(t)(pbest_i^d(t) − x_i^d(t)) (1)
x_i^d(t + 1) = 2 k_i^d / m + v_i^d(t + 1) + x_i^d(t) (2)
The vector pbest_i^d(t) represents the individual best previous position, gbest is the global best position among all agents, and C1 and C2 are two acceleration constants. Further, k_i is the change in kinetic energy, m is the mass of each molecule, and T is the temperature [11].
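A rough sketch of these updates is given below. It follows Eqs. (1) and (2) as reconstructed above, but the fitness function, the kinetic energy update, and the temperature cooling schedule are placeholders and assumptions, not the exact formulation of [11] or [12].

```python
import numpy as np

rng = np.random.default_rng(4)
n_agents, dim = 20, 8                     # agents = candidate channel vectors, dim = taps

x = rng.uniform(-1, 1, (n_agents, dim))   # positions (candidate channel coefficients)
v = np.zeros((n_agents, dim))             # velocities
k = rng.random((n_agents, dim))           # kinetic energy of each molecule
m, w, C1, C2, T = 0.5, 0.85, 1.0, 3.0, 0.95

def fitness(candidate):
    """Placeholder fitness: in the actual system this would be the error of the
    STBC-decoded data against the ideal (noise-free) channel response."""
    return np.sum(candidate ** 2)

pbest = x.copy()
pbest_fit = np.array([fitness(a) for a in x])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(50):
    r1, r2 = rng.random((n_agents, 1)), rng.random((n_agents, 1))
    v = T * w * v + C1 * r1 * (gbest - x) + C2 * r2 * (pbest - x)   # Eq. (1)
    x = 2 * k / m + v + x                                           # Eq. (2) as reconstructed
    k = 0.5 * m * v ** 2          # kinetic energy from the updated velocity (assumption)
    fit = np.array([fitness(a) for a in x])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()
    T *= 0.95                     # temperature cooling schedule (assumption)

print("best fitness:", pbest_fit.min())
```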
Step 5: Fitness is computed for each agent in the population. For each agent, the
received data is STBC decoded and is compared with ideal channel response
without noise.
Step 6: Updating the position of agents is done by updating the position and velocity
of all agents as shown in Eqs. (1) and (2) [11].
Step 7: Fitness is computed again for all the agents. If the present fitness value is less than the previous fitness value, the previous value is replaced with the present one; otherwise, the previous value is retained.
Step 8: Among all agents, the minimum fitness value is considered the optimum value, and the corresponding agent is considered the optimum channel; it is applied for decoding the received data.
Step 9: The algorithm is terminated when a maximum number of iterations is
reached or desired performance is achieved.
where x is the position and v is the velocity of the particles, C1 and C2 are the cognitive and social coefficients, d indicates the dimension, and r1 and r2 are uniformly distributed random numbers. The subscript k indicates the kth particle, w is the inertia weight, pk is the personal best position, and gbest is the global best.
In PSO, a method similar to the KGMO algorithm is implemented: here also, the channel coefficients are randomly generated, which gives the population of the algorithm and forms the random generation of the swarm particles. Then, the fitness is computed and the position is updated as per Eqs. (4) and (5). After updating the position, the fitness is computed again and compared with the previous fitness values. The procedure is repeated for the required number of iterations. Finally, the global fitness is computed, and the corresponding channel coefficients are considered the optimum channel values.
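A minimal sketch of this PSO-based channel search, with a placeholder fitness function standing in for the STBC decoding error, is given below.

```python
import numpy as np

rng = np.random.default_rng(5)
n_particles, dim = 20, 8
x = rng.uniform(-1, 1, (n_particles, dim))     # candidate channel coefficient vectors
v = np.zeros((n_particles, dim))
w, C1, C2 = 0.7, 2.0, 2.0                      # inertia weight and acceleration constants

true_channel = rng.normal(size=dim)            # stand-in for the unknown channel

def fitness(candidate):
    """Placeholder fitness: squared error against a reference response; in the
    actual system this would compare the decoded data with the ideal response."""
    return np.sum((candidate - true_channel) ** 2)

pbest = x.copy()
pbest_fit = np.array([fitness(p) for p in x])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    v = w * v + C1 * r1 * (pbest - x) + C2 * r2 * (gbest - x)   # velocity update
    x = x + v                                                   # position update
    fit = np.array([fitness(p) for p in x])
    better = fit < pbest_fit
    pbest[better], pbest_fit[better] = x[better], fit[better]
    gbest = pbest[pbest_fit.argmin()].copy()

print("estimation error of the best particle:", pbest_fit.min())
```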
4 Numerical Results
The system is simulated with MATLAB, and the simulation parameters are shown in Table 1. Simulations are carried out for different signal-to-noise ratios. For the PSO algorithm, C1 = C2 = 2 and the inertia weight satisfies 0.3 < w < 1. For the KGMO algorithm, C1 = 1, C2 = 3, the inertia weight lies between 0.85 and 0.2, the mass of each gas molecule m is between 0 and 1, and the temperature T is between 0.95 and 0.1 [11].
In Fig. 2, the system is implemented with four transmitting and one receiving antenna. In Fig. 3, two receiving antennas are considered, and similarly, in Fig. 4, four receiving antennas are considered at the receiver side. In all cases, the KGMO algorithm shows better BER performance compared to the PSO algorithm, since KGMO involves temperature and mass parameters and works on the law of gases, which helps in finding a globally optimal solution quickly.
In the 4 × 1 system, at a BER of 10^−1, the SNR required for PSO is 17 dB, whereas the KGMO-based system requires 15.4 dB; for the 4 × 2 and 4 × 4 cases also, there is a reduction in SNR with the KGMO-based system. In Fig. 5, all configurations are shown together. In Fig. 6, the number of iterations is compared for both algorithms, and it is clear that as the iterations increase, the BER performance improves.
Fig. 2 Bit error rate versus Eb/No (dB) for the 4 × 1 configuration (PSO and KGMO)
Fig. 3 Bit error rate versus Eb/No (dB) for the 4 × 2 configuration (PSO and KGMO)
Fig. 4 Bit error rate versus Eb/No (dB) for the 4 × 4 configuration (PSO and KGMO)
Fig. 5 Bit error rate versus Eb/No (dB) for the 4 × 1, 4 × 2, and 4 × 4 configurations (PSO and KGMO)
Fig. 6 Bit error rate versus Eb/No (dB) for different numbers of iterations
5 Conclusion
References
1. Alamouti SM (1998) A simple transmit diversity technique for wireless communications. IEEE
J Sel Areas Commun 16(8):1451–1458
2. Prasad R, Hara S (1997) Overview of multicarrier CDMA. IEEE Commun Mag 35(12):126–133
3. Sahu PR, Chaturvedi AK (2000) Application of multicarrier CDMA to mobile communication
technology. In: Proceedings of IEEE international conference on industrial technology 2000,
vol 1, pp 427–431
4. D’Orazio L, Sacchi C, Donelli M, Louveaux J, Vandendorpe L (2011) A Near-optimum mul-
tiuser receiver for STBC MC-CDMA systems based on minimum conditional BER criterion and
genetic algorithm-assisted channel estimation. EURASIP J Wireless Commun Netw 1:1–12
5. Seyman MN, Taspinar N (2013) Symbol detection using the differential evolution algorithm
in MIMO-OFDM systems. Turkish J Electr Eng Comput Sci 21:373–380
6. Paik C, Soni S (2007) A simulated annealing based solution approach for the two-layered
location registration and paging areas partitioning problem in cellular mobile networks. Eur J
Oper Res 178(2):579–594
7. Askarzadeh A, Leandro dos Santos C, Klein CE, Mariani VC (2016) A population-based
simulated annealing algorithm for global optimization. In: 2016 IEEE international conference
on systems, man, and cybernetics (SMC), pp 004626–004633
8. Balaji S, Vasudevan N (2014) Cuckoo search-aided LMS algorithm for channel estimation in
MC-CDMA systems. J Comput Sci 935–947
9. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: IEEE international conference,
vol 4, pp 1942–1948
10. Nahar AK, Bin Gazali KH (2015) Local search particle swarm optimization algorithm channel
estimation based on MC-CDMA system. ARPN J Eng Appl Sci 10(20):9659–9667
11. Moein S, Logeswaran R (2014) KGMO: a swarm optimization algorithm based on the kinetic
energy of gas molecules. Inf Sci 275:127–144
12. Sreesudha P, Malleswari BL (2017) An efficient channel estimation for BER improvement of
MC CDMA system using KGMO algorithm. In: 2017 International conference on communi-
cation and signal processing (ICCSP), pp 1004–1009
Quality Analysis for Real-Time Data
in MIMO Communication Using
Adaptive Kalman Filtration
Abstract This paper outlines an analysis of the signal estimation performance of Kalman-based filtration in comparison with the proposed unbounded Kalman filter. In the signal estimation process, the channels vary in a dynamic manner, and the time-variant channel has a dynamic impact on the transmitted signal. The conventional Kalman estimators are bounded to an error limit, where dynamic noise interference limits the error minimization. The convergence of the error variance with respect to real-time data has been analyzed. This paper presents the effect of the proposed estimation for different allocated data rates and the noise variance observed at the channel. The measured quality metrics validate the proposed approach for real-time data.
1 Introduction
2 Literature Outline
In past developments, various researchers have proposed new approaches for signal estimation in MIMO communication systems. These approaches are defined for frequency-selective attenuated channels, wherein the communication process is defined for the complex nature of the communicating signal. The pairwise channel affects the block data in a random fashion, where the pairwise error appears as a variance of the SNR in an inverse polynomial form. A novel pilot-based approach for channel estimation and tracking is defined in [1]. The presented approach performs channel estimation based on a pilot signal passed with the transmitted bits, where the pilot information is used as a reference signal to the channel estimation block, which estimates the channel in an adaptive manner. The feedback estimator uses the pilot signal as knowledge to make the error converge. For oscillatory phase noise conditions, a joint estimation model is suggested in [2].
This approach defines the joint estimation of phase and frequency under noisy channel conditions. A semi-blind approach is outlined in [3, 4] for a doubly selective channel in a MIMO system. The doubly selective channel affects the signal in a dual manner, and a semi-blind model using a feedback estimator is suggested. To minimize the estimation error under such dynamic conditions, a knowledge-based channel estimation using fuzzy decision rules is suggested in [5]. The approach defines fuzzy rules for a Kalman-based estimator to derive the channel estimate. The signal estimation, in this case, is observed to be faster under time-varying conditions. To improve the performance, a faster estimation approach in a MIMO-OFDM communication system is suggested in [6, 7], where the channel effect under user mobility is targeted for minimization. An estimation approach based on channel estimation and superimposition of a training sequence on the MIMO-OFDM system is outlined in [8]. The signal boundedness deviates from the actual value with an increase in block length in such a system; to obtain minimum channel interference in block fading channels, as presented in [9], a random upper bound [10, 11] and a lower bound coding [12] with instantaneous channel capacity were developed for the improvement of the MIMO system. As the upper and lower bounds converge at the channel outage probability for large MIMO blocks, a semi-blind approach of signal estimation based on frequency domain analysis is outlined in [13], wherein a timing synchronization process is developed for signal estimation under channel-variant conditions. To minimize the effect of time-variant noise on real-time data, a dual-bound estimation in Kalman filtration is suggested.
3 System Model
To develop the estimation logic under time-variant conditions, a MIMO system following OFDM coding is considered; a conventional MIMO-OFDM communication system is shown in Fig. 1.
where ϕk(t) is defined as the discrete channel coefficients. For the signal received at the receiver, the cyclic prefix is removed and the frame symbol is recovered. For the derived frame data, an N-point DFT is processed to obtain the received samples Rk(t), giving the output as
R_k(t) = (1/√N) Σ_{i=0}^{N−1} c_i(t) · exp(−j 2πki/N), 0 ≤ k ≤ N − 1 (3)
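A small sketch of Eq. (3), checked against NumPy's orthonormal FFT, is given below; the random complex samples are only stand-ins for the actual received frame.

```python
import numpy as np

def dft_demodulate(samples):
    """N-point DFT of the received time-domain samples, scaled by 1/sqrt(N)
    as in Eq. (3); equivalent to an orthonormal FFT."""
    N = samples.size
    k = np.arange(N).reshape(-1, 1)
    i = np.arange(N).reshape(1, -1)
    W = np.exp(-2j * np.pi * k * i / N)
    return (W @ samples) / np.sqrt(N)

c = np.random.default_rng(6).normal(size=64) + 1j * np.random.default_rng(7).normal(size=64)
R = dft_demodulate(c)
print(np.allclose(R, np.fft.fft(c, norm="ortho")))   # True: matches the orthonormal FFT
```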
The problem of channel estimation, in this case, is to derive the signal offset and the signal attenuation observed during channel propagation under time-variant conditions. For single-user data, the estimation under time-variant channel interference is not a trivial task; with multiple user data accessed simultaneously, however, the channel effect on each signal is highly variant and the observed interference is highly destructive in nature, so a signal with a low power level is, in this case, unpredictable in the presence of higher power levels. The conventional sliding correlator uses a correlation logic to compare the received signal with a reference signal and derive the correlator error. The transmitted data is available at the output of the channel with a delay spaced every t seconds, and the delay of the largest information bit is taken as the reference delay in the channel. The estimation logic of the conventional model is illustrated in Fig. 2. The estimation factor depends on a high autocorrelation peak and low cross-correlation in the estimated channel information.
As the channel is observed to be AWGN, the estimation unit is highly effective for co-channel communicating users. The linear estimator in such a system fails to give an estimate under dynamic conditions; adaptive filters are hence used for signal estimation. Among the various forms of estimators, the Kalman filter has gained more attention due to its simple and effective estimation process, where the estimator uses a state transition process for estimation.
4 Estimator Design
Kalman filters are among the optimal feedback estimator units used for signal estimation in the communication domain. Kalman filters are used in various signal processing approaches, and the ease of their decision logic and design modeling gives them the advantage of implementation in real-time communication units. The filter uses feedback control to estimate the process state and uses the feedback estimation as a measurement to make a decision. The process of estimation is executed in two phases of operation: (1) the time update and (2) the measurement update, as shown in Fig. 3.
The time update process propagates the prior estimate forward in time, and the measurement update corrects it for the next step of estimation. The two operations act as prediction and correction operations, respectively. The operation of the Kalman feedback is illustrated in Fig. 4.
This estimator unit is applied to a linear system model, where the measurement and correction logic is used as an optimal recursive linear estimator. The Kalman feedback logic estimates the noise vector as a measure of the noise variance. For the estimation of 'X0' from the received signal 'Y', an initial estimator unit develops, from 'X0', a value 'X1' and derives an optimal estimate using 'X0' and 'X1'. The process is to derive an estimate of Y with reference to 'X0', 'X1', …, 'Xj' for j = 1, 2, …, defined for the estimation of the recursive time-variant signal.
In the estimation process, Kalman filtering minimizes the "average" estimation error. Here, the estimate is fed back to drive the estimation error down to a minimum level of tolerance. However, conventional Kalman filtration has the following limits.
(1) The estimation process using a Kalman filter assumes that the noise properties are known. If the system is unknown, the Kalman filter fails to predict its effect.
(2) The Kalman filter process minimizes the "average" estimation error; in the case of dynamic time-variant channel conditions, the average estimation error does not converge.
These limitations tend to slow down or lower the estimation performance of Kalman-based estimator logic. To obtain estimates under highly dynamic channel conditions, a new unbounded estimator logic is suggested. The proposed estimator logic derives an open-bound estimation limiting the tolerance value between minimum and maximum estimates. With the above-stated limitations, a new estimation logic based on min-max estimation is proposed. The proposed min-max filtering minimizes the "worst-case" error observed under time-variant channel conditions. In the Kalman estimation, the state is considered to follow a linear dynamic model given by

x_{k+1} = A x_k + B u_k + w_k, (4)
y_k = C x_k + z_k, (5)
where A, B, and C are known parameters, k is the time metric, x is defined as the state
of the system, u is the reference input signal, y is the reference output parameter, and
w and z are the noise matrices.
The state x is to be computed in the measurement phase; the state cannot be measured directly, whereas y is measured directly. In this case, the Kalman filter is used to derive the state. With the weighted squared norm defined as
||x||²_Q = x^T Q x, (6)
the min-max cost function to be bounded is

J = ave ||x_k − x̂_k||²_Q / (ave ||w_k||²_W + ave ||v_k||²_V). (8)
Here, the average is taken over a period of time of k samples. The estimate is computed for the maximum correlation and minimum variance from the input x.
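A small sketch that only evaluates this cost ratio for given error and noise sequences is shown here; the weighting matrices and sequences below are illustrative, and designing the estimator so that the ratio stays below a prescribed bound is the actual min-max problem.

```python
import numpy as np

def minmax_cost(x_err, w_noise, v_noise, Q, W, V):
    """Cost ratio of Eq. (8): average weighted estimation error over the sum of the
    average weighted process and measurement noise energies (||x||^2_Q = x^T Q x)."""
    num = np.mean([e @ Q @ e for e in x_err])
    den = np.mean([w @ W @ w for w in w_noise]) + np.mean([v @ V @ v for v in v_noise])
    return num / den

rng = np.random.default_rng(9)
Q = W = V = np.eye(2)
x_err = rng.normal(scale=0.1, size=(100, 2))      # estimation errors x_k - x_hat_k
w_noise = rng.normal(scale=0.2, size=(100, 2))    # process noise samples
v_noise = rng.normal(scale=0.3, size=(100, 2))    # measurement noise samples
print("J =", round(minmax_cost(x_err, w_noise, v_noise, Q, W, V), 4))
```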
5 Experimental Results
In the receiver unit, these channel-affected signals are received simultaneously and
passed to the demodulator unit. The 16 QAM demodulation operation is carried
out to recover the signal back. The received signal then processed for cyclic prefix
removal and passed to the estimator unit. The adaptive unbounded Kalman esti-
mator is used for the estimation error minimization, the blocks recovered after the
estimation process is shown in Fig. 7.
The estimation is iterated for all the processing blocks, and the estimated bits are buffered to regenerate the transmitted information. Figure 8 illustrates the recovered information for the developed system.
To assess the quality of the processed data, the quality metrics 'peak signal-to-noise ratio (PSNR)', 'spatial similarity index measure (SSIM)', and 'mean square error (MSE)' are used, tested over different noise effects. PSNR is used as a measure of the signal strength recovered over the measured channel noise interference. It is given as
PSNR (dB) = 10 log10 (I²max / MSE), (9)
where Imax is the maximum value of the given test sample. As an error-measuring parameter, the Mean Square Error (MSE) is used, defined as the mean error between the original and the recovered sample. This parameter is given as
MSE = (1/(M × N)) Σ (x − x̂)², (10)
SSIM = Σ_i x(i, j) ⊗ x̂(i, j) / Σ_i (x(i, j))². (11)
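A small sketch computing these three metrics as written in Eqs. (9)–(11), on illustrative image-like arrays, is given below; note that Eq. (11) is the paper's ratio form rather than the usual windowed SSIM.

```python
import numpy as np

def mse(x, x_hat):
    """Eq. (10): mean squared error over an M x N sample."""
    return np.mean((x.astype(float) - x_hat.astype(float)) ** 2)

def psnr(x, x_hat, i_max=255.0):
    """Eq. (9): PSNR in dB for a peak value i_max."""
    return 10.0 * np.log10(i_max ** 2 / mse(x, x_hat))

def ssim_ratio(x, x_hat):
    """Similarity ratio as written in Eq. (11): element-wise product over the
    squared reference; the paper's form, not the usual windowed SSIM."""
    return np.sum(x * x_hat) / np.sum(x ** 2)

rng = np.random.default_rng(10)
original = rng.integers(0, 256, size=(64, 64)).astype(float)
recovered = original + rng.normal(scale=5.0, size=original.shape)
print(f"MSE={mse(original, recovered):.2f}, PSNR={psnr(original, recovered):.2f} dB, "
      f"SSIM={ssim_ratio(original, recovered):.3f}")
```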
Fig. 9 MSE over variation in data rate (bits/sec) for the test sample
Fig. 10 PSNR over variation in noise density for the test sample
The obtained SSIM values over varying noise density are computed in Fig. 11. The similarity index is observed to be 0.6 units higher for the proposed approach as compared to the conventional model. Observations for the developed system for test samples with different noise variations are given in Table 1.
Fig. 11 SSIM over variation in data rate (bits/sec) for the test sample
Table 1 Observation for the developed system for test samples with different noise variations
Noise variance, Conventional MIMO system (MSE PSNR SSIM), Proposed communication system (MSE PSNR SSIM)
0.1 1.53 46.42 0.83 1.53 46.6 0.88
0.3 1.62 46.2 0.76 1.62 46.38 0.79
0.6 1.74 45.9 0.62 1.7 46.15 0.67
0.8 1.8 45.75 0.47 1.75 46.05 0.59
6 Conclusion
This paper outlined an analysis of the proposed unbounded Kalman estimation for the MIMO communication system. The developed system was evaluated for different test images and audio samples. The test was carried out over variations of the noise variance of the channel, and the quality metrics were measured in terms of PSNR, MSE, and SSIM. The observations made for the developed system show an improvement of 2 to 4 dB in PSNR compared to conventional MIMO Kalman coding. The similarity index is enhanced by 30–50% under different noise levels. A similar analysis for audio samples was carried out. The enhancement of the evaluation metrics validates the usage of the unbounded Kalman filter for audio signal communication.
References
1. Siyau M, Li T, Prieto J, Corchado J, Bajo J (2015) A novel pilot expansion approach for MIMO
channel estimation and tracking. In: ICUWB. IEEE, pp 1–5
2. Mehrpouyan H, Nasir AA, Blostein SD, Eriksson T, Karagiannidis GK, Svensson T (2014)
Joint estimation of channel and oscillator phase noise in MIMO system. IEEE Trans Signal
Process 60(9):4790–4807
3. Movahedian A, McGuire M (2014) Efficient and accurate semi blind estimation of MIMO-
OFDM Doubly-selective channels. In: 80th VTC Fall. IEEE, pp 1–5
4. Zhang S, Wang D, Zhao J (2014) A Kalman filtering channel estimation method based on state
transfer coefficient using threshold correction for UWB systems. Int J Future Gener Commun
Netw 7(1):117–124
5. Gutta V, Anand KKT, Movva TSVS, Korivi BR, Killamsetty S, Padmanabhan S (2015) Low
complexity channel estimation using fuzzy Kalman filter for fast time varying MIMO-OFDM
systems. In: ICACCI. IEEE, pp 1771–1774
6. Natori T, Tanabe N, Furukawa T (2014) A MIMO–OFDM channel estimation algorithm for
high–speed movement environments. In: ISCCSP. IEEE, pp 348–351
7. Nair JP, Raja Kumar RV (2010) Optimal superimposed training sequences for channel estima-
tion in MIMO-OFDM systems. EURASIP J Adv Signal Process (Springer, Hindawi Publishing
Corporation) 13
8. Zhong K, Lei X, Dong B, Li S (2012) Channel estimation in OFDM systems operating under
high mobility using Wiener filter combined basis expansion model. EURASIP J Wirel Commun
Netw (Springer) 186
9. Wang Y, Li L, Zhang P, Liu Z (2009) DFT-based channel estimation with symmetric exten-
sion for OFDMA systems. EURASIP J Wirel Commun Netw (Springer, Hindawi Publishing
Corporation) 8
10. Chen Y-S, Wu J-Y (2012) Statistical covariance-matching based blind channel estimation for
zero-padding MIMO–OFDM systems. EURASIP J Adv Signal Process (Springer) 139
11. Yen RY, Liu H-Y, Tsai C-S (2012) Iterative joint frequency offset and channel estimation for
OFDM systems using first and second order approximation algorithms. EURASIP J Wirel
Commun Netw (Springer) 341
12. Cicerone M, Simeone O, Spagnolini U (2006) Channel estimation for MIMO-OFDM system
by modal analysis/filtering. IEEE Trans Commun 54(11). IEEE
13. Kung T, Parhi KK (2013) Semi-blind frequency domain timing synchronization and channel
estimation for OFDM systems. EURASIP J Adv Signal Process (Springer)
A Novel Test Programs for Hybrid RISC
Controller
Abstract Most embedded systems need flexibility and performance. In order to improve these factors, Hybrid RISC Controllers are being used in present embedded systems. Therefore, it is necessary to detect the faults in Hybrid RISC Controllers and correct them. Previously, several approaches have been developed for different processors to identify permanent faults. Software-Based Self-Test (SBST) methods are used for generating the test programs automatically. However, with these approaches the test time is high, and the Device Under Test (DUT) may not be correct even though every line of code is executed. In this paper, a novel method based on VHDL Verification Methodology (VVM) test programs is proposed to make the detection and correction of faults easier, and the test duration is analyzed.
1 Introduction
Nowadays, the size of integrated circuits has become a major design factor, and effort is concentrated mainly on scaling the manufacturing process. This scaling can introduce problems in the testing of processor chips. In addition, permanent faults may occur even during the operational phase as a result of metal-migration phenomena or aging of the circuit [1].
A System-on-Chip (SoC) design may contain one or more embedded processors, together with other cores such as embedded RAM or ROM that store the data to be processed [2]. Testing these processor cores is therefore a cumbersome task for engineers. Moreover, if the operating frequencies of the Automatic Test Equipment (ATE) and of the SoC differ widely, not all permanent faults will be detected, since some appear only when testing is performed at the actual speed of the IC (at-speed testing) [2]. Therefore, appropriate test programs must be developed to achieve high fault coverage and to minimize the test duration.
2 Related Work
Several methodologies, such as Built-In Self-Test (BIST) and Software-Based Self-Test (SBST), have been developed to improve the fault coverage of processor cores and processor-based Systems-on-Chip (SoCs) [3].
Some of these methodologies use external hardware to perform the test, which may be inaccurate as a result of the increasing gap between Automatic Test Equipment (ATE) and SoC operating frequencies; external at-speed testing therefore becomes problematic and expensive.
Modern ICs can develop faults even when the manufacturing test result is positive, so additional testing is required to ensure a high quality of service. One such test procedure is Built-In Self-Test (BIST), which checks the circuitry every time before operation starts. Like offline testing, BIST still uses the ATE, but the test pattern generator and the test response analyzer are on-chip circuitry.
BIST shifts the testing task from the Automatic Test Equipment (ATE) to internal hardware: additional hardware and software are integrated into the circuit to perform self-testing. This self-testing technique reduces the cost and time required for testing the circuitry and also helps to improve the fault coverage, but at the cost of increased silicon area. It also plays a major part in determining the design time, performance, circuit cost, and power consumption, since random pattern generation causes high switching activity whose practical value is minimal.
SBST generates test programs that are executed by the processor itself and can fully exercise the processor or other components in the system, so the faults revealed by the results are detected easily. An important advantage of SBST is that no extra hardware is needed, which minimizes area and in turn reduces the test cost. In addition, SBST facilitates at-speed testing and can also be used for online testing. Considering all these factors, SBST is widely applied for testing processors and SoCs.
The approaches implemented so far on processor cores mainly make use of pseudorandom instruction sequences and operations/operands. Functional approaches can be classified into two subclasses: the strategies in the first class depend mainly on code randomizers to obtain test programs, while the second class uses a feedback strategy, in which the generated test programs are evaluated according to suitable metrics and the emphasis is on improving the test patterns already generated.
Among the many other approaches for processor cores, one is the structural testing methodology, which involves two stages. The first is the test preparation stage, in which pseudorandom pattern sequences are developed for each processor component based on iterative methods and on the constraints imposed by the instruction set; self-test signatures, consisting of the seed, the pseudorandom TPG configuration, and the number of test patterns, are then encapsulated with the test sequences to differentiate each component. The second is the test application stage, in which the self-test signatures of the components are expanded on-chip into pseudorandom test patterns by an LFSR emulated in software, stored in embedded memory, and then applied to the component [4].
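As an illustration of this test application stage, the sketch below expands a self-test signature (seed, tap configuration, and pattern count) into pseudorandom patterns with a software-emulated LFSR. It is a minimal sketch in Python; the tap positions and register width are illustrative assumptions, not the configuration used in the cited methodology.

```python
def lfsr_patterns(seed, taps, width, count):
    """Expand a self-test signature (seed, tap set, pattern count) into
    pseudorandom test patterns using a software-emulated Fibonacci LFSR."""
    mask = (1 << width) - 1
    state = seed & mask
    patterns = []
    for _ in range(count):
        patterns.append(state)
        # Feedback bit is the XOR of the selected tap positions.
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & mask
    return patterns

# Example: 8-bit LFSR with illustrative taps and a nonzero seed.
print(lfsr_patterns(seed=0xA5, taps=(7, 5, 4, 3), width=8, count=5))
```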
The structural method has two subcategories. The first, hierarchical, focuses on one processor module at a time. The second, RTL, contains methods in which structural RTL information, along with ISA information, is extracted during test program generation to build instruction sequence templates; these are used for verifying and propagating the faults present in the module under test, and the templates are then adjusted to the module's testability requirements.
For complex embedded processors, the proposed SBST methodology consists of the three phases shown in Fig. 1. Phase A: the processor components are identified, together with the operations of each component and the instructions that exercise them; finally, the instructions used for controlling or observing the processor registers are identified. Phase B: processor components with the same properties are grouped, and the components are prioritized for test development. Phase C: self-test routines are developed using compact loops of instructions; by reusing test algorithms from a library for generic functional components, small precomputed test sets are generated. Fault coverage with this algorithm is high [5].
3 Proposed Methodology
The functionality of any processor is verified either by checking the results of its functional units for a given set of inputs or by verifying the data stored in its memory. Here, the data stored in the memory is verified using VVM (VHDL Verification Methodology), and if any error is present in the stored data, it is corrected.
Standard VHDL provides the features required for randomization of the stimulus and for functional coverage, which are very important when verifying larger, system-level designs. The main disadvantage of this approach, however, is that it is quite advanced and requires high-level coding skills. To avoid these problems, the VHDL Verification Methodology is used to generate test programs for Hybrid RISC Controllers. It allows constrained random verification techniques to be adopted, so that the complex system can be verified with randomized inputs and the design can be checked over a wide range of possible inputs.
4 Results
The VVM-based test programs for the Hybrid RISC Controller were implemented in Xilinx ISE 14.1 targeting the Automotive Spartan-6 family, and the simulation was performed with the ISim simulator; the results are shown below. Figure 2 shows the RTL for the Hybrid RISC Controller.
The TTL for Hybrid RISC Controller is shown in Fig. 3.
Fig. 4 Simulation results for test programs for hybrid RISC controller
The simulation waveforms for the test programs show that any error present in the data stored in the memory of the Hybrid RISC Controller is detected and corrected, as shown in Fig. 4. A comparison of the time duration of the existing and proposed work is given in Table 1.
Table 1  Comparison of time analysis
  Time (ns) for existing work: 5.307
  Time (ns) for proposed work: 4.647
5 Conclusion
Fault coverage is not satisfactory in the existing methodologies, so a new approach, VVM-based test programs for Hybrid RISC Controllers, is proposed here. With this methodology, the data stored in the memory is verified, any error present in the stored data is corrected, and the test duration is analyzed.
References
1 Introduction
In the present century, wireless communication has become a basic means of information exchange. Cellular and Internet networks are infrastructure-based networks that rely on a base station or a server to control the flow of information; without a pre-established infrastructure, communication cannot commence in these networks. Building the infrastructure is a time-consuming process and requires heavy investment, and if a base station develops a problem, the entire network is affected. Therefore, a new kind of communication network needs to be established,
P. Lavanya (B)
Sreenidhi Institute of Science and Technology, Hyderabad, Telangana, India
e-mail: lavanyamam@yahoo.com
V. S. K. Reddy
Mallareddy College of Engineering and Technology, Hyderabad, Telangana, India
e-mail: vskreddy2003@gmail.com
A. Mallikarjuna Prasad
Jawaharlal Nehru Technological University, Kakinada, Andhra Pradesh, India
e-mail: a_malli65@yahoo.com
one that does not require any centralized administration, which is precisely the wireless ad hoc network.
An ad hoc network is a continuously and dynamically self-organizing network without any pre-existing infrastructure, containing devices called nodes that connect to each other to form a temporary network [1]. Applications of ad hoc networks include communication among a group of soldiers in inhospitable terrain, temporary communication among delegates at a conference, communication in emergencies where conventional infrastructure-based facilities fail to serve the purpose, and communication within a rescue team during rescue operations [2].
Mobile ad hoc networks (MANETs) are one class of wireless ad hoc networks. Because the nodes are mobile, the network topology changes constantly, and link breakages occur very often in this dynamic environment. MANETs, being multi-hop networks with limited resources, need an efficient routing protocol. Meeting QoS (Quality of Service) parameters such as delay, jitter, throughput, and bandwidth in such dynamic environments is a much more complex problem to be addressed, and routing is one way of attaining QoS needs in MANETs [3, 4]. Figure 1 gives an idea of an ad hoc network.
This paper presents a simulation and QoS (Quality of Service) metrics comparison of two of the most basic and popular MANET routing protocols,
proactive DSDV and reactive AODV. Section 2 briefly describes routing protocols, and DSDV and AODV in particular. Section 3 is about the performance and simulation environment details, whereas Sect. 4 is a discussion of the simulation results. Section 5 gives the conclusion and future scope of the paper.
2 Routing Protocols
Routing can be defined as the process of finding the path from a source node to the ultimate destination through which the packets should travel. A routing protocol is an algorithm or set of rules followed by nodes to determine how routing should be performed. Routing protocols in MANETs are broadly categorized into proactive, reactive, and hybrid [5]. In proactive routing, routes to every node of the network are available at any node at any time, whereas in reactive routing, routes are discovered when needed. By their working principle, proactive routing protocols offer low delay with increased routing overhead, whereas reactive routing protocols exhibit the reverse. Hybrid routing protocols combine the best features of both.
DSDV is a table-driven routing algorithm [6, 7]. It extends the Distance Vector Routing (DVR) protocol and overcomes the main disadvantage of DVR by using sequence numbers. Each mobile node in DSDV maintains a routing table containing routes to every other node of the network. DSDV uses periodic and triggered routing updates to keep the table information current: whenever changes in the network topology are noticed, triggered updates propagate the routing information as quickly as possible. Two types of routing table updates are available in DSDV: incremental and full dump. In the latter, the entire routing table is shared among the nodes, whereas in an incremental update, only the changes since the last update are shared [8], as sketched below.
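A minimal sketch of the DSDV table update rule just described, written in Python for illustration; the entry fields and function name are assumptions, and the rule shown is the standard DSDV precedence (prefer a higher destination sequence number, and break ties with a lower hop count).

```python
def dsdv_update(table, dest, next_hop, hops, seq_no):
    """Apply one DSDV route advertisement to a node's routing table.
    Prefer newer sequence numbers; break ties with smaller hop counts."""
    entry = table.get(dest)
    if (entry is None
            or seq_no > entry["seq_no"]
            or (seq_no == entry["seq_no"] and hops < entry["hops"])):
        table[dest] = {"next_hop": next_hop, "hops": hops, "seq_no": seq_no}
    return table

routes = {}
dsdv_update(routes, dest="N5", next_hop="N2", hops=3, seq_no=100)
dsdv_update(routes, dest="N5", next_hop="N3", hops=2, seq_no=100)  # same seq, fewer hops wins
print(routes)
```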
4 Results
Figure 2 illustrates the network setup in the simulator. The performance parameters considered for the evaluation are the PDR, expressed as a percentage, and the average end-to-end delay in ms. The aim of the experiments is to examine the behavior of DSDV and AODV for different data rates and different node speeds.
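For reference, the two evaluation metrics can be computed from a packet trace in the usual way: PDR as the percentage of sent packets that are delivered, and the average end-to-end delay as the mean delivery latency in milliseconds. A minimal sketch, assuming send and receive timestamps have already been parsed from the simulator trace:

```python
def pdr_and_delay(sent, received):
    """sent/received: dicts mapping packet id -> timestamp (seconds)."""
    delivered = [pid for pid in sent if pid in received]
    pdr = 100.0 * len(delivered) / len(sent) if sent else 0.0
    delays_ms = [1000.0 * (received[p] - sent[p]) for p in delivered]
    avg_delay_ms = sum(delays_ms) / len(delays_ms) if delays_ms else 0.0
    return pdr, avg_delay_ms

sent = {1: 0.10, 2: 0.20, 3: 0.30}
received = {1: 0.15, 3: 0.42}
print(pdr_and_delay(sent, received))  # approximately (66.7, 85.0)
```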
Table 2 lists the values of PDR and average end-to-end delay as a function of
mobility of the nodes and data rates. These results are the average of 10 simulations
in each respective simulation environment. Figure 3 shows the variation of PDR for
data rates of 50 kbps, 100 kbps, 250 kbps, and 500 kbps in a, b, c, and d, respectively.
AODV is superior to DSDV in all cases for the considered mobility speeds of 1, 5, and 10 m/s, except at 500 kbps, whereas DSDV performs well when the average end-to-end delay is considered for the same set of parameters, as shown in Fig. 4.
From Table 1, increasing the data rate makes the protocols less efficient, with decreasing PDR and rising delay values. In both AODV and DSDV, low PDR values are observed at high node mobilities. Furthermore, neither protocol is superior overall: in terms of PDR, AODV performs better than DSDV, whereas DSDV is satisfactory when the delay parameter is considered.
The main challenge of mobile ad hoc networks is to route with minimum overhead and minimum delay even when conditions are dynamic. From the results, it is observed that a single routing protocol is not sufficient to fulfill all the requirements of an ideal routing protocol. Finding the most efficient algorithm for a given arrangement would help in optimizing the performance of the network. Further, as mobile usage increases at a rapid pace, the need to determine the most suitable algorithm, one that is efficient under all sorts of topological conditions, is ever increasing.

Fig. 3 PDR for a 50 kbps, b 100 kbps, c 250 kbps, d 500 kbps

Fig. 4 Average end-to-end delay for a 50 kbps, b 100 kbps, c 250 kbps, d 500 kbps
References
1. Murthy CSR, Manoj BS (2004) Ad hoc wireless networks: architectures and protocols. Pearson
Education, pp 299–363
2. Boukerche A, Turgut B, Aydin N, Ahmad MZ, Bölöni L, Turgut D (2011) Routing protocols
in ad hoc networks: a survey. J Comp Netw 55(13):3032–3080
3. Layuan L, Chunlin L, Peiyan Y (2007) Performance evaluation and simulations of routing
protocols in ad hoc networks. J Comput Commun 30(8):1890–1898
4. Sharma SK, Sharma S (2017) Improvement over AODV considering QoS support in mobile
ad hoc networks. Int J Comput Netw Appl 4(2):47–61
5. Abolhasan M, Wysocki T, Dutkiewicz E (2004) A review of routing protocols for mobile ad
hoc networks. J Ad hoc Netw 2(1):1–22
6. Arya V (2013) A survey of enhanced routing protocols for MANETs. IEEE Commun Mag
3:1–9
7. Royer EM, Toh CK (1999) A review of current routing protocols for ad hoc mobile wireless
networks. IEEE Pers Commun 6(2):46–55
8. Thampuran SR (1999) Routing protocols for ad hoc networks of mobile nodes. Department of
Electrical and Computer Engineering, University of Massachusetts
9. Perkins C, Belding-Royer E, Das S (2003) Ad hoc on-demand distance vector (AODV) routing
(No. RFC 3561). Internet Society
10. Adam G, Bouras C, Gkamas A, Kapoulas V, Kioumourtzis G, Tavoularis N (2011) Performance
evaluation of routing protocols for multimedia transmission over mobile ad hoc networks. In:
4th joint IFIP wireless and mobile networking conference (WMNC), October 2011. IEEE, pp
1–6
11. Kadam AD, Wagh SS (2013) Evaluating MANET routing protocols under multimedia traffic.
In: 4th IEEE international conference on computing, communications and networking tech-
nologies, July 2013, pp 1–5
12. Broch J, Maltz DA, Johnson DB, Hu YC, Jetcheva J (1998) A performance comparison of
multi-hop wireless ad hoc network routing protocols. In: 4th annual ACM/IEEE international
conference on mobile computing and networking. ACM, pp 85–97
Hand Gesture-Based User Interface
for Controlling Mobile Applications
Abstract Nowadays, mobile devices have become an essential element of life for every individual. The use of mobile devices and applications varies from person to person. The user interaction mechanism in mobile devices has changed from keypads to touchpads. People prefer to multitask while using mobile devices, which has created a need to find more natural ways of interacting with them. Visual image processing is one of the areas that can help to implement such natural methods of interaction, using various gestures of the body as input to control not only mobile devices but also various applications while multitasking. Here, the key challenges include recognition of gestures and implementation of the control mechanism. This paper describes the use of our previously implemented methods for static and real-time hand gesture recognition to communicate with and control the applications on mobile devices through hand gestures.
1 Introduction
The heavy workload in an individual's life creates situations that demand multitasking, and mobile devices are the ones with which people prefer to multitask. For example, while having lunch or dinner, people generally prefer to answer calls
A. V. Dehankar (B)
Department of Computer Technology, Priyadarshini College of Engineering, Nagpur, India
e-mail: archana_dehankar@rediffmail.com
S. Jain
Shri Mata Vaishno Devi University, Jammu, India
e-mail: dr_sanjeevjain@yahoo.com
V. M. Thakare
CSE Department, Amravati University, Amravati, India
e-mail: vilthakare@yahoo.co.in
or make calls to save time. This may lead to hazardous events, e.g., an accident caused by using a mobile phone while driving. To control such events, government authorities all over the world have put various rules and regulations in place. The debatable question "Do people follow these rules and regulations?" is beyond the scope of this paper.
The important aspects highlighted in this paper are the hand gesture recognition mechanisms, the communication mechanism, and the controlling mechanism. The paper concludes by describing how the objectives are met and provides a comparison of existing approaches with the proposed approach.
2 Literature Review
Several applications and research studies in the literature focus on utilizing hand gestures for controlling desktop applications or other real-life applications. As the proposed work is oriented towards utilizing hand gestures for controlling mobile devices and applications, this paper reviews the studies available on the combination of hand gestures and mobile devices.
Static and dynamic are the two categories of hand gestures. Static hand gestures are postures formed by different combinations of the fingers of the hand and are generally recorded as a still picture, whereas dynamic hand gestures are captured in real time from a moving video input [1, 2]. Static gestures or postures are simple and require less computational complexity [3, 4].
Cheng et al. [5] have presented a contactless gesture recognition system using only
two infrared proximity sensors. The system allows a user to flip e-book pages, scroll
web pages, zoom in/out, and play games on mobile devices using intuitive gestures,
without touching the screen or wearing/holding any additional device. Using the
proposed IR feature set and classifier, the system recognizes gestures with 98%
precision and 88% recall rate. In paper [6], Lee et al. have implemented a hand gesture recognition system based on computer vision. To recognize the gesture, the phases used are background subtraction, conversion to the YCbCr color space, locating the skin region, noise removal using morphological and connected-component methods, and finally recognition. The recognition accuracy is about 94.6%, with a processing time of 39 ms for static gestures. When applied to motion gesture images, the accuracy achieved is 89% and the processing time is 55 ms for each gesture.
Saxena et al. [7] have implemented a hand gesture recognition system that uses a client-server architecture. Here, images captured from the Android device are preprocessed (edge detection, thinning, etc.) on the server. The extracted shapes were used as patterns and classified using a neural network classifier. The experimental results showed an accuracy of 77%, but the processing time is higher because of the client-server architecture.
In [8], the authors have implemented a static gesture recognition system that uses the camera of an Android device to capture hand gestures. The input image is segmented using thresholding, and the resultant binary image undergoes operations such as
rotation, cropping, normalization, etc. PCA and SVM are used for successive feature
extraction and classification stages, respectively. The recognition accuracy is 97.6%.
An and Hong [9] have proposed a method that can be handled through the single
hand, which holds a mobile phone and makes use of the rear-facing camera to capture
the gestures. They have used various fingertip gestures to perform click-and-move
operations: up, down, left, and right. The authors used skin color segmentation, mor-
phological operations, and skeletonization for finger tracking. This system recognizes five dynamic gestures against varying backgrounds under uniform and low-lighting conditions using a 1.3 MP camera. The gesture recognition accuracy is 88% in uniform lighting conditions when the background does not contain colors similar to skin color.
Lahiani et al. [10, 11] have proposed a system to recognize hand gestures using SVM. Here, the image is captured using the front camera of the smartphone, after which hand segmentation, extraction of features such as contours described with convex polygons (to obtain information about the fingertips), and classification are performed.
The system is divided into three modules, viz. Gesture Processing Module, Gesture
Communication Module, and Action Module as shown in Fig. 1.
Recognition of the gesture plays an important role in the system, as correct recognition ensures that the required action happens. In the gesture processing module, the user can provide static gestures or real-time gestures. Gestures may be performed using either the left or the right hand.
Initially, the static gesture images captured at runtime through a laptop/web camera are recognized on the system using the proposed Accurate End Point Identification (AEPI) method [12]. As shown in Table 1, the gestures are assigned meanings from one to five. The meanings are later mapped to actions to control the mobile applications.

Table 1  Patterns for gestures and the assigned meanings
  Pattern 1 (PFG-1): Gesture identified is 1
  Pattern 2 (PFG-2): Gesture identified is 2
  Pattern 3 (PFG-3): Gesture identified is 3
  Pattern 4 (PFG-4): Gesture identified is 4
  Pattern 5 (PFG-5): Gesture identified is 5
The AEPI method is implemented in five steps: image acquisition and preprocessing; removal of unwanted objects, black holes, and noise; centroid detection [13]; thinning; and gesture recognition. The implementation details of the AEPI method, along with results, are discussed in detail in our previous work [12, 14, 15]. The AEPI method has been tested on images with uniform and varying backgrounds and on images with multiple objects [15]. The recognition accuracy of the AEPI method is 93.67%, which is comparatively high with respect to existing methods [15]. The recognition accuracy for each pattern is calculated using Eq. 1.
$$\mathrm{RA} = \frac{\mathrm{ARI}}{\mathrm{TI}} \times 100 \quad (1)$$

where RA is the recognition accuracy, ARI is the total number of accurately recognized images, and TI is the total number of test images.
Fig. 2 Communication module: MATLAB client (port 5000), Java server (port 3000), and Android client
To control the activities of the music player, images captured in static format are used as input to the system. The static input is recognized by the gesture processing module, and the recognized gesture is communicated to an Android client running on the Emulator. Based on the recognized gesture, actions such as play, next, previous, and pause are performed. Table 2 shows the possible gesture patterns, the recognition results, and the actions on the Emulator.
The real-time gesture input is used to control the activities in Google Chrome. As shown in Table 3, for any input detected from the five possible patterns, an action is performed in Google Chrome to open a specific application/website: for pattern 1, Facebook is opened; for pattern 2, Gmail; for pattern 3, Yahoo; for pattern 4, YouTube; and for pattern 5, Tutorials Point.
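A minimal sketch of this action mapping, written in Python rather than the MATLAB/Java/Android stack used in the actual system; the URLs and function name are illustrative, while the pattern-to-application mapping follows the description above and Table 3.

```python
import webbrowser

# Pattern index -> site opened in the browser (following Table 3).
ACTIONS = {
    1: "https://www.facebook.com",
    2: "https://mail.google.com",
    3: "https://www.yahoo.com",
    4: "https://www.youtube.com",
    5: "https://www.tutorialspoint.com",
}

def act_on_gesture(pattern_id):
    """Open the web application associated with a recognized gesture pattern."""
    url = ACTIONS.get(pattern_id)
    if url is None:
        print("Unrecognized pattern:", pattern_id)
        return
    webbrowser.open(url)

act_on_gesture(2)  # a recognized PFG-2 gesture opens Gmail
```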
Table 2 (excerpt)  Input gesture patterns and actions on the Emulator: Pattern 2 (PFG-2): Play Next Song; Pattern 3 (PFG-3): Play Music; Pattern 4 (PFG-4): Pause
The aim of this work was to control mobile applications using hand gestures. We have successfully used our previously implemented methods for static and real-time gesture recognition to control the activities on the mobile Emulator. The results of the AEPI and RTEPI methods are successfully communicated using the gesture communication module, and the action module performs the actions as expected. The following two objectives are successfully achieved.
1. Controlling mobile applications using runtime static gestures
2. Controlling mobile applications using real-time gestures.
The proposed system has several merits over existing systems. It works on static images and video captured at runtime, and hence it is more robust. It works with varying backgrounds and multiple objects, and it is not tied to a specific database. The system works with hand gestures performed by either the right or the left hand and by any user.
Table 3 (excerpt)  Input gestures, expected recognition result, and action in Google Chrome: Pattern 2 (PFG-2): Open Gmail; Pattern 3 (PFG-3): Open Yahoo; Pattern 4 (PFG-4): Open YouTube; Pattern 5 (PFG-5): Open Tutorials Point
5 Conclusions
The proposed system is able to effectively recognize hand gestures from static and real-time video inputs. The recognition results are communicated to the Android Emulator, and the expected actions are performed as long as the recognition results are accurate. The results of static hand gestures are used to control the activities of the music player on the Emulator; using static hand gestures, four actions are performed in the music player: play music, play next song, play previous song, and pause. The results of hand gestures recognized from the real-time video input are communicated to control the activities in the Google Chrome browser, where five different gestures are used. The system is able to open applications such as Facebook, Gmail, YouTube, Yahoo, and Tutorials Point in Google Chrome. In the future, more activities can be implemented using combinations of several gestures.
References
1. Sarkar AR, Sanyal G, Majumder S (2013) Hand gesture recognition systems: a survey. Int J
Comput Appl (0975–8887) 71(15)
2. Badi HS, Sabah Hussein (2014) Hand posture and gesture recognition technology. Neural
Comput Appl 25(3):871–878
3. Khan RZ, Ibraheem NA (2012) Comparative study of hand gesture recognition system. In: Pro-
ceedings of international conference of advanced computer science and information technology
(CS&IT), vol. 2, no. 3, pp 203–213
4. Khan RZ, Ibraheem NA (2012) Hand gesture recognition: a literature review. Int J Artif Intell
Appl (IJAIA) 3(4):161–174
5. Cheng H-T, Chen AM, Razdan A, Buller E (2011) Contactless gesture recognition for mobile
devices. In: MIAA 2011, Palo Alto, CA, USA, 13 February 2011
6. Lee HC, Shih CY, Lin TM (2013) Computer-vision based hand gesture recognition and its
application in Iphone. In: Pan JS, Yang CN, Lin CC (eds) Advances in intelligent systems and
applications. Smart innovation, systems and technologies, vol 2. Springer, Berlin, Heidelberg
7. Saxena A, Jain DK, Singhal A (2014) Hand gesture recognition using an android device. In:
2014 IEEE fourth international conference on communication systems and network technolo-
gies, pp 819–822
8. Joshi TJ, Kumar S, Tarapore NZ, Mohile V (2015) Static hand gesture recognition using an
android device. Int J Comput Appl (0975–8887) 120(21):48–53
9. An J-H, Hong KS (2011) Finger gesture-based mobile user interface using a rear facing camera.
In: IEEE international conference on consumer electronics (ICCE), pp 303–304
10. Lahiani H, Elleuch M, Kherallah M (2016) Real time static hand gesture recognition system
for mobile devices. J Inf Assur Secur 11(2):67–76. ISSN: 1554-1010
11. Lahiani H, Kherallah M, Neji M (2015) Real time hand gesture recognition system for
android devices. In: 15th international conference on intelligent systems design and appli-
cations (ISDA), Morocco 14–16 Dec 2015
12. Dehankar AV, Jain S, Thakare VM (2016) Static hand gesture recognition using accurate end
point identification (AEPI) method. Int J Control Theory Appl 9(43):169–178
13. Dehankar AV, Jain S, Thakare VM (2017) Detecting centroid for hand gesture recognition
using morphological computations. In: ICISC 2017, Coimbatore, India, 19–20 January 2017
14. Dehankar AV, Jain S, Thakare VM (2017) Using AEPI method for hand gesture recognition
in varying background and blurred images. In: ICECA 2017, Coimbatore, India, 20–22 April
2017
15. Dehankar AV, Jain S, Thakare VM (2017) Performance analysis of accurate end point identi-
fication method of static hand gesture recognition. In: ICCUBEA 2017, 17–18 August 2017
16. Dehankar AV, Jain S, Thakare VM (2016) Dynamic hand gesture recognition system using real
time end point identification method. Int J Control Theory Appl 9(43):161–168
Design and Implementation of MEMS
Baseless Mouse
V. Durga Bhavani, D. Indra Jagadeesh, K. Girija Sravani, P. Ashok Kumar,
Koushik Guha and K. Srinivasa Rao
Abstract With day-to-day advancements in technology, the present world is advancing towards the IoT. MEMS sensors are highly compatible devices for the IoT, and using them results in tremendous performance and flexibility. In this paper, we have developed a baseless mouse using this technology. The implementation is done by monitoring the pitch and roll angle deflections of the MEMS sensor ADXL345 with an IoT device (NodeMCU). This setup is connected to a PC or other mobile devices using Wi-Fi to transmit the data. At the receiver end, the data values are analyzed using a Python tool, and the corresponding response is generated by varying the cursor's pixel values.
1 Introduction
At present, a normal mouse cannot sense motion without a base; even a wireless optical mouse requires a base for moving the cursor. With advancing technology, every device is becoming mobile. Mouse applications have already been realized [1] using an Arduino, ADXL335/330 accelerometers, and flex sensors, but as a wired mouse. Similarly, an Arduino-based wireless mouse was designed [2] using the MMA7260 accelerometer, and a mouse application has been implemented using a smartphone's built-in accelerometer [3]. So far, these Arduino-based designs have not come into common use because of their size and complexity. There are IoT devices such as the NodeMCU that can be used for applications like mouse operation, UAVs (Unmanned Aerial Vehicles), and gaming [4–6]. We therefore developed a wireless mouse (software and hardware) that does not require any base or platform for its operation, using the NodeMCU and the ADXL345 (a MEMS three-axis accelerometer).
This type of mouse can be connected remotely through the ESP8266 (Wi-Fi module) and transfers the sensor data (from the ADXL345) to mobile devices. The raw data is analyzed, and the response takes the form of cursor movement. The analysis and operation are performed using MicroPython.
2 Experimental Design
The NodeMCU is loaded with MicroPython, which analyses the sensor values, converts them into a byte stream, and transmits them to mobile devices using the ESP8266. Due to its size and wireless capabilities, it gives a wide range of flexibility in different applications.
The internal register map is configured for the I2C protocol to transmit data to the NodeMCU (processor). I2C is a two-line communication protocol in which one line carries the serial clock and the other the serial data, with a common ground. The SDA and SCL lines are connected between the sensor and the NodeMCU with a clock speed of 100 kHz. The device structure is shown in Fig. 4.
From the datasheet of the sensor, it is known that the sensor address for I2C communication is 0x53, which is preprogrammed in the NodeMCU. After receiving the data, the NodeMCU converts it into byte-encoded form (UTF-8) and transmits it to the PC through the ESP8266. Here, the ESP8266 establishes a UDP client-server connection, over which the byte-encoded data is transmitted between the devices wirelessly over the network. When this byte-encoded data is received by the PC, it is decoded into integer values. These values are analyzed by the Python tool using PyAutoGUI and produce proportional displacements in the cursor pixel values.
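A minimal PC-side sketch of this receive, decode, and move loop, assuming the NodeMCU sends UTF-8 strings of the form "x,y,z" over UDP; the port number, scaling factor, and the tilt formulas (standard accelerometer pitch/roll equations) are assumptions and not necessarily the authors' exact Eqs. 1 and 2.

```python
import math
import socket
import pyautogui  # third-party package used by the PC-side Python tool

SCALE = 4.0  # pixels per degree of tilt (illustrative)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5005))  # illustrative UDP port

while True:
    data, _ = sock.recvfrom(64)
    x, y, z = (float(v) for v in data.decode("utf-8").split(","))
    # Standard tilt equations for a three-axis accelerometer.
    pitch = math.degrees(math.atan2(x, math.sqrt(y * y + z * z)))
    roll = math.degrees(math.atan2(y, math.sqrt(x * x + z * z)))
    # Proportional displacement of the cursor in pixels.
    pyautogui.moveRel(SCALE * roll, SCALE * pitch)
```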
To find the appropriate resolution and g value, we interfaced the sensor with an Arduino UNO, using serial communication in the Arduino IDE at a baud rate of 9600. These connections are shown in Fig. 5. The plotting is performed using the serial plotter tool in the Arduino IDE, which gives the deviation in X, Y, and Z with 8-bit resolution; therefore, the values vary between 0 and 255, as shown in Fig. 6.
To find the response of the sensor, we plotted the angle deviation with respect to the physical motion of the sensor at different time instants. By programming Eqs. 1 and 2, the deflections about the pitch and roll axes are calculated from the X, Y, and Z deviations. These angle deflections are visualized using the serial plotter, as shown in Fig. 7. By varying the pitch and roll, the results on the serial monitor are analyzed practically and compared with the reference angles. The results are shown in Fig. 8, with the PC-end analysis in Fig. 9. By comparing the practical values with the real values, the errors in the pitch and roll angles are calculated and reported in Table 1.
The NodeMCU is connected to the PC and establishes the connection; the PC is then ready to respond according to the motion of the baseless mouse.
4 Conclusions
In this paper, we have implemented a mouse without any base, using a MEMS accelerometer sensor. This is achieved using the ESP8266, which is a low-power and compatible device; it consumes little space and gives high performance. Due to its wireless capabilities, it provides a good user experience for many applications such as gaming, use of the mouse without any base, and motion detection. The device can withstand a harsh environment and is robust due to its linearity. This novel design has the flexibility to face future challenges.
Acknowledgements The authors would like to thank SERB (Science Engineering Research
Board), Govt. of India, New Delhi, for providing partial financial support to carry out this research
work under ECRA Scheme (File No: SERB/ECR/2016/000757).
References
1. Huang W-C, Hou H-W, Fang W-C (2016) A remote control solution for mouse cursor of computer
by using accelerometer. In: IEEE 17th international symposium on consumer electronics (ISCE)
2. Berlia R, Santosh P (2014) Mouse brace a convenient computer mouse using accelerometer,
flex sensors and microcontroller. In: International conference on contemporary computing and
informatics (IC3I)
3. Ahmed S, Zubair MA, Shaik IB (2015) Accelerometer based wireless air mouse using Arduino
micro-controller board. In: Proceedings of global conference on communication technologies
(GCCT)
4. Li Q, Cao H, Lu Y, Yan H, Li T (2016) Controlling non-touch screens as touch screens using
Airpen, a writing tool with in-air gesturing mode. In: International symposium on system and
software reliability
5. Dutka V, Starychenko S, Melnyk M, Kernytskyy A (2016) Usage of acceleration and angle
of rotation of hand for wireless control of computer. In: MEMSTECH, Polyana-Svalyava
(Zakarpattya), UKRAINE
6. Vatavu RD (2012) User-defined gestures for free-hand tv control. In: European conference on
interactive TV and video
A Secure Video Watermarking Approach
Using CRT Theorem in DCT Domain
1 Introduction
Digital data authentication has become a great challenge due to the rapid development of computer and internet technologies. A large amount of digital data is present on the web, and this data can be accessed and tampered with by anyone using processing tools such as Photoshop and video editors. Protecting the authenticity of these digital files is a challenging task, which can be accomplished with the use of highly secure watermarking techniques [1].
Video broadcasting with the utmost quality over DVB-2 and the Internet is of great interest. However, most of the broadcast data is not authenticated and is distributed without any protection; to protect such data content invisibly, the video has to be authenticated. In recent times, digital video watermarking has become more popular because it offers an effective approach to copyrighting and protecting this valuable data.
Today, video watermarking poses a demanding and challenging task for many researchers. Embedding important data into a host medium (a digital image, video, or sound) is termed data hiding. Plenty of research has been conducted on digital image data hiding schemes, whereas modern devices are increasingly oriented toward video content due to the rising demand for internet services; video data hiding therefore has extra potential for many business applications, yet relatively few research methods have been proposed to protect video content. Due to the exhaustive nature of video, it is much preferable to alter the components in the transform domain rather than in the spatial domain. Hence, video watermarking approaches are categorized into frame-based data hiding methods and transform-domain-based schemes [2]. In this paper, one such approach in the transform domain is described.
A secure and more robust video authentication approach, which is robust against transmission noise and attacks, is proposed in this paper. The paper is organized as follows: Sect. 1 presents an introduction to watermarking and its importance in current research; Sect. 2 presents the related work done earlier by different researchers; Sect. 3 presents the basic concepts of DCT and CRT involved in the proposed framework, together with the key frame selection procedure and the embedding algorithm; and Sect. 4 presents the experimental results obtained with this approach, including the effect of attacks with various noises and filtering approaches on this framework.
2 Related Works
In recent years, many video watermarking algorithms have been proposed to embed robust watermarks in videos. Most of them center on robustness against general signal processing attacks, which are mainly categorized as geometric attacks such as rotation, scaling, and cropping; filtering attacks such as low-pass, average, and median filtering; and compression attacks in the transform domain such as JPEG compression [3, 4].
Hiding data in video sequences is performed either at the bitstream level or at the data level. In the bitstream-level approach, the redundancies in the compression model are exploited, which is very useful for alterations and gives good scope for hiding data; however, this type of hiding scheme is fragile and is used for authentication. Data-level schemes are more robust to attacks, so they are used for a broader range of applications.
Kim et al. [5] embed watermark bits in the frequency domain as pseudo-random sequences. Langelaar et al. [6] hide watermarks by removing or retaining chosen DCT coefficients. Kapotas et al. [7] explored the redundant block selection of H.264 encoding. Wong et al. [8] alter the quantization matrix of the DCT coefficients at the bitstream level. Sarkar et al. [9] proposed an application of quantization index modulation to alter the low-frequency coefficients, which is very suitable for hiding a large amount of data. Liu et al. [10] proposed a 3D DWT domain-based data hiding scheme in which the LL sub-band is used for data hiding, with BCH codes employed to increase the error correction capability.
3 Proposed Frameworks
An MP4 video file with a frame rate of 25 frames per second is taken as the input. Let nf be the number of frames in the video sequence. The frame selection algorithm is based on the calculation of the entropy of each individual frame, and the frames with the minimum, maximum, and mean entropy values are indexed and selected for the embedding process. Figure 1 shows the building blocks of the video sequence, and it can be observed that nf frames are present in a single shot. In this analysis, only three frames are selected for embedding the data. The entropy of a frame is calculated as
calculated as
Ef − p(i, j)log ( p(i, j))
i1 j1
where p(i, j) is the probability and “f” is the number of frames. Frames are selected
which have max(E f ), min(E f ), and mean(E f ) values.
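A minimal sketch of this key-frame selection step, assuming grayscale frames stored as NumPy arrays; the entropy is computed from the normalized intensity histogram, and the frame whose entropy is closest to the mean is taken as the "mean" frame (an assumption about how the mean criterion is applied).

```python
import numpy as np

def frame_entropy(frame):
    """Shannon entropy of an 8-bit grayscale frame."""
    hist, _ = np.histogram(frame, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def select_key_frames(frames):
    """Return indices of the min-, max-, and nearest-to-mean-entropy frames."""
    e = np.array([frame_entropy(f) for f in frames])
    return int(e.argmin()), int(e.argmax()), int(np.abs(e - e.mean()).argmin())

frames = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
print(select_key_frames(frames))
```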
Fig. 1 Building blocks of a video sequence (scenes)
So far, many researchers have used the Chinese Remainder Theorem (CRT) in their watermarking methods; its main intention is to provide security by selecting a set of relatively prime numbers.
Let the r moduli be represented as µ = {M_1, M_2, ..., M_r}, such that any two M_i are relatively prime. Then

$$Z \equiv R_i \pmod{M_i}, \quad i = 1, 2, \ldots, r$$

where R_i is called the residue, and the solution for Z is obtained as

$$Z = \sum_{i=1}^{r} R_i \,\frac{M}{M_i}\, K_i \pmod{M}$$

where M = M_1 M_2 \cdots M_r and K_i is the multiplicative inverse of M/M_i modulo M_i.
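A minimal numerical sketch of this reconstruction, using the small coprime moduli M1 = 6 and M2 = 7 reported in the experiments; the helper names are illustrative, and the modular inverse relies on Python 3.8+ pow(a, -1, m).

```python
from math import prod

def crt_solve(residues, moduli):
    """Recover Z from residues R_i = Z mod M_i for pairwise coprime moduli."""
    M = prod(moduli)
    z = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        Ki = pow(Mi, -1, m)      # multiplicative inverse of M/M_i modulo M_i
        z += r * Mi * Ki
    return z % M

# A watermark value of 23 split into residues mod 6 and mod 7, then recovered.
moduli = (6, 7)
residues = tuple(23 % m for m in moduli)        # (5, 2)
print(residues, crt_solve(residues, moduli))    # (5, 2) 23
```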
The proposed framework for embedding the data into the video sequence is based on CRT in the DCT domain. This approach is robust against noise attacks; three different types of noise (speckle, Gaussian, and salt-and-pepper) were validated with different video streams, and the results are tabulated in the next section. Figure 2 shows the basic block diagram of the proposed algorithm.
4 Experimental Results
The proposed approach is tested and evaluated with different video sequences available at [12]. One of the key features of this approach is locating the position at which to embed the data in the cover frame, since an 8 × 8 block size is used for the DCT decomposition and a total of 64 DCT coefficients is obtained per block. These contain DC and AC components; as is known from [13], the DC component carries most of the energy and information of the block, so the AC coefficients are selected for data hiding in a zigzag manner. Figures 3, 4, and 5 show the results for M1 = 6 and M2 = 7.

Fig. 3 a Without speckle noise, b with speckle noise of value 0.01, c with speckle noise of value 0.02, d with speckle noise of value 0.03, e–h extracted watermarks
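A minimal sketch of the block transform and coefficient selection, assuming SciPy's type-II DCT for the 8 × 8 blocks and the usual JPEG-style zigzag scan; the number of AC positions taken is illustrative, and the embedding itself (CRT-based modification of the chosen coefficients) is omitted.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """2-D type-II DCT of an 8x8 block (orthonormal)."""
    return dct(dct(block.T, norm="ortho").T, norm="ortho")

def zigzag_indices(n=8):
    """JPEG-style zigzag scan order for an n x n block."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 -p[0] if (p[0] + p[1]) % 2 == 0 else p[0]))

block = np.random.randint(0, 256, (8, 8)).astype(float)
coeffs = dct2(block)
scan = zigzag_indices()
ac_positions = scan[1:6]          # skip the DC term at (0, 0)
print([round(coeffs[i, j], 2) for i, j in ac_positions])
```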
The experimental results show that the present approach is robust under noise and filtering attacks, with a PSNR varying from 28 to 22 dB at 0.01 noise density, and PSNRs of 48.31 and 49.21 dB for the median and Wiener filtering attacks, as shown in Tables 1 and 2.
Fig. 4 a Without salt and noise, b with noise density 0.01, c with noise density 0.02, d with noise
density 0.03, e–h extracted watermarks
This result also presents evidence of imperceptibility, with the normalized correlation coefficient not falling below 0.8, which is a good achievement. The performance analysis of the proposed approach in terms of PSNR and NC for various noises is shown in Fig. 6.
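For reference, the two fidelity measures quoted here follow their usual definitions: PSNR computed from the mean squared error between the original and watermarked frames, and the normalized correlation (NC) between the original and extracted watermarks. A minimal sketch, assuming 8-bit grayscale arrays:

```python
import numpy as np

def psnr(original, processed):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def normalized_correlation(w, w_extracted):
    """NC between the original and extracted watermark."""
    w = w.astype(float)
    w_extracted = w_extracted.astype(float)
    return float(np.sum(w * w_extracted) / np.sum(w * w))

frame = np.random.randint(0, 256, (256, 256)).astype(np.uint8)
noisy = np.clip(frame + np.random.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
print(round(psnr(frame, noisy), 2))
```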
Fig. 5 a Without Gaussian, b with noise density 0.01, c with noise density 0.02, d with noise
density 0.03, e–h extracted watermarks
Fig. 6 Performance analysis of the proposed approach in terms of PSNR and NC a and b for salt
and pepper, c and d for Gaussian noise, e and f for speckle noise
5 Conclusions
A simpler, secure, and robust video watermarking approach is proposed in this paper. It provides more security through the use of CRT, and it is more imperceptible and robust due to the modification of the AC components of the DCT coefficients. Presently, this work is performed on grayscale video; it can further be extended to different color transformations with more security features. From the experimental analysis, it is evident that this approach is not only robust but can also be used for high-quality digital video transmission, as the NC values are quite satisfactory.
References
1. Podilchuk I, Delp EJ (2001) Digital watermarking: algorithms and applications. IEEE Signal
Process Mag 18(4):33–46
2. Lin ET, Eskicioglu AM, Lagendijk RL, Delp EJ (2005) Advances in digital video content pro-
tection. Proc IEEE (Special Issue on Advances in Video Coding and Delivery) 93(1):171–183
3. Hartung F, Girod B (1997) Fast public-key water-marking of compressed video. In: Proceedings
of IEEE international conference on image processing, vol 1, Oct 1997, pp 528–531
4. Langelaar GC, Lagendijk RL, Biemond J (1999) Watermarking by DCT coefficient removal:
a statistical approach to optimal parameter settings. In: Proceeding of SPIE symposium of
security and watermarking of multimedia contents, pp 2–13
5. Kim WG et al (1999) An image watermarking scheme with hidden signatures. In: Proceedings
of IEEE international conference on image processing, vol 2, Oct 1999, pp 205–210
6. Langelaar GC, Lagendijk RL, Biemond J (1999) Watermarking by DCT coefficient removal:
a statistical approach to optimal parameter settings. In Proceedings of SPIE symposium of
security and watermarking of multimedia contents, pp 2–13
7. Kapotas SK, Varsaki EE, Skodras AN (2007) Data hiding in H-264 encoded video sequences.
In: Proceedings of the IEEE 9th workshop multimedia signal processing, Oct 2007
8. Wong K, Tanaka K, Takagi K, Nakajima Y (2009) Complete video quality-preserving data
hiding. IEEE Trans Circuits Syst Video Technol 19(10):1499–1512
9. Sarkar A, Madhow U, Chandrasekaran S, Manjunath BS (2007) Adaptive MPEG-2 video
data hiding scheme. In: Proceedings of the 9th SPIE security steganography watermarking
multimedia contents, pp 373–376
10. Liu H, Huang J, Shi YQ (2005) DWT-based video data hiding robust to MPEG compression
and frame loss. Int J Image Graph 5(1):111–134
11. Patra JC, Karthik A, Bornand C (2010) A novel CRT-based watermarking technique for authen-
tication of multimedia contents. Digital Signal Process 442–453
12. http://trace.eas.asu.edu/
13. Rao KR, Hwang JJ (1996) Techniques and standards of image, video and audio coding. Prentice
Hall
Classification and Suppression of Noises
in Fetal Heart Rate Monitoring: A Survey
Abstract Fetal heart rate (FHR) monitoring helps in detecting the fetal health status and prompts doctors toward a decision for operative delivery at the earliest. Despite its great significance, it poses extreme challenges in diagnosing the exact health condition because of noise intrusion from either internal or external sources. Irrespective of the source, the noises can be classified based on their frequency of occurrence and amplitude. Hence, from the fetal heart rate information obtained from the mother's abdomen, a simple SVM-based classification is performed to distinguish normal from abnormal fetal heart rates. Later, based on the type of noise identified, a suitable programmable filter technique is reviewed to remove the unwanted information by varying the filter coefficients. A comprehensive set of features is chosen from the MIT-BIH Arrhythmia Database. The analysis is carried out for each feature independently of the rest and is then continued generically with automatic feature selection. The obtained results are classified based on similarities in features and spectrum. The simulations are performed using MATLAB and ModelSim, and the area and timing analysis is carried out using Xilinx ISE.
Keywords Fetal heart rate (FHR) monitoring · SVM classifier · Noise sources · Programmable filter
1 Introduction
Congenital heart defects may occur due to various factors such as environmental hindrances, genetic syndromes, or inherited disorders, which are to be clinically monitored through fetal ECG (FECG) signals before a baby is born. Therefore, the prime challenge for doctors is to diagnose the problem before it affects the health of the fetus or mother. The QRS wave of the ECG signal gives the precise heart rate of the abdominal ECG (AECG) signal. Detection can be done by placing electrodes on the maternal abdomen in either of two ways: (i) auscultation, i.e., periodically listening to the fetal heart rate using a stethoscope or Doppler transducer, and (ii) electronic fetal monitoring instruments, which involve external and internal methods of recording the heartbeat of the fetus and the contractions of the uterus.
For internal monitoring, an electronic transducer is connected to the fetal skin. Usually, the heart rate is obtained by attaching a wire electrode to the fetal scalp or another body part, and hence this is otherwise called a scalp or spiral electrode. Since it is directly connected to the fetus and movement does not disturb the heart rate monitoring, it gives a more precise and persistent recording than external monitoring. Such monitoring is used in situations where external monitoring of the heart rate is not a reliable solution.
A fetoscope is used to listen to the fetal heart rate externally and record the signal from the mother's abdomen. An alternative is an ultrasound Doppler device, which is used for the fetal heart rate during prenatal visits. Whether the device is a Doppler or a fetoscope, the purpose of monitoring is to aid doctors in measuring the heart rate at prescribed intervals or over consecutive periods of time during labor. As previously discussed for internal monitoring, the measured heart rate is transferred through the transducers to the monitor for analysis and can be printed onto a graph for future reference.
The most commonly used FECG monitoring technique is noninvasive, or external, fetal monitoring, unless an emergency arises during labor. There are various sources of noise that hinder the diagnosis of such abnormalities, a few of which include (i) the inclusion of the maternal ECG signal with the fetal ECG, (ii) power-line interference, (iii) muscular movements, (iv) respiration patterns, (v) skin resistance, and (vi) device noise.
The frequency of the maternal signal ranges from about 1.16 to 1.35 Hz, whereas the fetal heart rate lies approximately between 2 and 2.66 Hz. The power spectra are quite often observed at frequencies from 1 to 20 Hz for different arrhythmias [1]. The signal amplitude decreases drastically above 12 Hz and suddenly vanishes. The frequency components are chosen to lie between 1 and 12 Hz if the value of n is selected as 12 for ECG beat identification occurring at multiple instances. Such a frequency range is undisturbed by high-frequency components above 20 Hz and low-frequency components below 1 Hz, whether arising from power-line interference or from respiration and baseline drift.
In Sect. 2, a brief review of SVM classification and of the prevailing programmable filter methods for removing various types of noise is given, followed by the implementation and simulation results of the algorithms. The inferences regarding noise classification and elimination are reviewed and concluded in Sect. 4.
2 Existing Methodologies
The most common technique used for fetal health monitoring is cardiotocography, in which the heart rate of the fetus is analyzed visually [2]. Though there are various other interpretation techniques, the final analysis is usually done by the doctors based on the heart rate. Empirical mode decomposition (EMD) has been combined with a support vector machine to classify different noises in fetal heart rate recordings that deviate from healthy recordings because of noise intruding from internal or external sources [3]. Fifteen subjects underwent fetal heart rate monitoring at a sampling rate of 4 Hz, and around 90 records, each recorded over a duration of 20 min, were selected randomly. All the datasets were verified and rated as normal or abnormal by doctors. Two datasets were chosen: a training set with 60 samples and a testing set with 30. The support vector machine then classifies the fetal heart rate samples using the standard deviations of the EMD components as input features to the classifier (Fig. 1).
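A minimal sketch of this classification stage, assuming the standard deviations of the EMD intrinsic mode functions have already been extracted as feature vectors; scikit-learn's SVC and the randomly generated placeholder data stand in for the SVM implementation and the clinical recordings of the cited work.

```python
import numpy as np
from sklearn.svm import SVC

# Feature matrix: one row per FHR recording, one column per IMF standard deviation.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 4))        # 60 training recordings (placeholder data)
y_train = rng.integers(0, 2, size=60)     # 0 = normal, 1 = abnormal (expert labels)
X_test = rng.normal(size=(30, 4))         # 30 test recordings

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)
print(clf.predict(X_test)[:5])
```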
Data processing: The FHR data were collected by placing electrodes on the mother's abdomen, and a database covering the dilation and second stages of labor was collected over 1200 recordings. Further, data quality requirements such as signal degradation, sample rate, and the numbers of false positives, true positives, and true negatives are also taken into consideration. The classification identified almost five different noise sources [4] leading to misinterpretation of the data, as shown in Table 1.
Processing pipeline: signal pre-processing, pulse detection, feature extraction, and classifier performance evaluation
FHR recordings of various time lengths were analyzed, and the shortest segments were of prime importance to avoid time bias [5]. Hence, the performance and reliability of the system rely largely on the proper selection of features and their classification. Since it is a continuous process, the selected features keep multiplying during the evolution phase, and the feature carrying the appropriate information marks the end of feature construction. The information gathered can consist of morphological features from different domains, such as the time domain or the frequency domain. During frequency analysis, the spectrum of energy bands is assumed to reflect the functions of the nervous system [6].
Fig. 2 Representation of
adaptive filter
An adaptive filter is a linear filter whose parameters vary according to the optimization algorithm being used, as shown in Fig. 2. Since these filters are variable, their state has to be recorded at every instant of time [7]. Adaptive filters are used in applications where some parameters of the required processing are not known in advance or tend to change. The transfer function is altered through a closed feedback loop: the filter approaches optimum performance by modifying the variable parameters of the transfer function so as to minimize the cost function at the next iteration [8]. For efficient implementation, the array multiplier has been replaced by a Booth multiplier [9].
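A minimal sketch of this adaptive filtering idea, using the common LMS weight update; the step size, filter length, and test signals are illustrative assumptions, and this software model is not the hardware design discussed here.

```python
import numpy as np

def lms_filter(x, d, n_taps=16, mu=0.01):
    """LMS adaptive filter: x is the reference (noise) input, d the noisy signal.
    Returns the error signal e, which approximates the cleaned signal."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        x_vec = x[n - n_taps:n][::-1]      # most recent samples first
        y = np.dot(w, x_vec)               # filter output (noise estimate)
        e[n] = d[n] - y                    # error = desired - estimate
        w += 2 * mu * e[n] * x_vec         # weight update toward minimum MSE
    return e

t = np.arange(0, 1, 1 / 500.0)
clean = np.sin(2 * np.pi * 2.2 * t)          # fetal-like component near 2.2 Hz
noise = 0.5 * np.sin(2 * np.pi * 50 * t)     # power-line style interference
e = lms_filter(noise, clean + noise)         # e approaches the clean component
```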
Booth multiplier: A Booth multiplier multiplies two signed binary numbers in two's complement form. It examines the adjacent bit pairs of the N-bit multiplier Z in signed 2's complement [10], including an implicit bit below the LSB, z−1 = 0. For each bit zi, with i traversing from 0 to N − 1, the bits zi and zi−1 are considered. Whenever the two bits are equal, the product accumulator P is left unchanged. If zi = 0 and zi−1 = 1, then the multiplicand times 2^i is added to P. Where zi = 1 and zi−1 = 0, the multiplicand times 2^i is subtracted from the product accumulator. After these comparisons, the final result is available in the product accumulator [9].
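The recoding rule above can be rendered directly in software; the sketch below is an illustrative model of the radix-2 Booth algorithm (the paper implements it in hardware), with the operand width and test values chosen arbitrarily.

```python
def booth_multiply(multiplicand, z, n_bits=8):
    """Radix-2 Booth multiplication of signed (two's complement) operands.

    For each bit position i (LSB first, with an implicit z[-1] = 0):
      z_i z_{i-1} = 0 1 -> add multiplicand * 2**i to the accumulator P
      z_i z_{i-1} = 1 0 -> subtract multiplicand * 2**i from P
      equal bits        -> P unchanged
    """
    z_bits = [(z >> i) & 1 for i in range(n_bits)]   # two's-complement bits of Z
    prev = 0                                         # z_{-1} = 0
    p = 0                                            # product accumulator
    for i in range(n_bits):
        if z_bits[i] == 0 and prev == 1:
            p += multiplicand << i
        elif z_bits[i] == 1 and prev == 0:
            p -= multiplicand << i
        prev = z_bits[i]
    # The subtraction at i = n_bits - 1 gives the MSB its negative weight,
    # so passing signed Python integers yields the signed product directly.
    return p

assert booth_multiply(7, -3) == -21
assert booth_multiply(-5, -6) == 30
```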
MATLAB: The Filter Design and Analysis Tool (FDA Tool) is used as an effective user interface for designing and analyzing FIR or IIR filters: the digital filter specifications are assigned and the poles and zeros are manipulated within MATLAB. It also provides platforms for analyzing the magnitude response and phase response of the filters.
As shown in Fig. 3, the required filters are designed and the filter coefficients are obtained using the chosen algorithms. FIR filters based on the least mean square algorithm and the equiripple algorithm have been designed for different pass frequencies. The coefficients exported from the FDA tool for different specifications are used directly in our VHDL code for designing the filters, with memory allocated for their storage, and the rest of the filter design, including the multiplier blocks and the delays, is then implemented.
ModelSim—Intel FPGA 10.5b: This VHDL simulation environment is used along with Xilinx ISE to simulate the noise classes identified in MATLAB using the FDA tool. It is normally used for writing code that describes the functionality or structure of a logic circuit and often precedes synthesis. Since ModelSim provides a precise and consistent logic description, with the timing specifications written in a text module called a test bench, the simulation of noise cancelation using the FIR algorithms is carried out in this environment. The noise models classified using the SVM are simulated with the specifications provided by the user and are tested on the samples collected by external monitoring for noise or interference removal.
ISE Design Suite 14.2: Xilinx ISE is a software tool used to synthesize the designs and perform area, power, and timing analysis. Since Xilinx ISE can handle complex algorithms faster than other competing programs, with a high degree of logic density, it is widely preferred for its reduced execution time and low installation and maintenance cost. It is also chosen for analyzing the optimization parameters such as area, power, and time (Fig. 4).
Step 1: Coefficients of the fetal signal with noises taken from the database collected
in text document are read in our VHDL code as input.
Step 2: For the required specifications, generate filter coefficients using FDA TOOL
in MATLAB.
Step 3: Filters are designed in VHDL which consist of multipliers, delay element,
and adder that uses the above coefficients.
Step 4: Compile and synthesize the code.
Step 5: Filtered output for given input using different algorithms is generated and
the efficiency is calculated using ModelSim as shown in Fig. 5.
Step 6: Power efficiency, area consumption, and time are analyzed for the given
design using Xilinx ISE Design suite.
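A software mirror of Steps 1–5, under the assumption that an equivalent low-pass FIR design stands in for the FDA-tool coefficients and a synthetic record stands in for the database samples, might look as follows (illustrative only; the paper performs the filtering in VHDL).

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 4.0                                        # Hz, sampling rate of the FHR records
coeffs = firwin(numtaps=31, cutoff=0.5, fs=fs)  # stand-in for the FDA-tool export

t = np.arange(0, 20 * 60, 1 / fs)               # a 20-minute record
fetal = np.sin(2 * np.pi * 0.05 * t)            # slow "FHR" component
noisy = fetal + 0.4 * np.random.randn(len(t))   # stand-in for the noisy database signal

filtered = lfilter(coeffs, [1.0], noisy)        # direct-form FIR: y[n] = sum b[k] x[n-k]
print("output RMS error:", np.sqrt(np.mean((filtered - fetal) ** 2)))
```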
Fig. 6 a Multiband filter design using FDA tool, b adaptive multiband filter output
Adaptive Multiband Filter: The adaptive filter provides an efficient way of filtering, and the multiband structure further improves the filtration. As the human ECG contains different frequencies, a multiband filter with different frequency ranges gives an accurate output. In Fig. 6, (a) the multiband FIR filter uses multiple passbands, and the equiripple design produces the most efficient filters, implementing a given specification with the least number of coefficients; (b) shows that the noise is removed more efficiently compared with all the other filters considered. The greater the number of levels and passbands, the more effective the filtration. The first-level output signal is used as the input for the second-level filtration. Hence, the result has clear and accurate peaks of the required ECG.
4 Conclusion
This work on classification and suppression of noises in fetal heart rate monitoring, carried out on a dataset from the MIT-BIH Arrhythmia Database, has been shown to match the actual FECG signals with an area reduction of over 12% (400 registers) and an execution time of 3.7 ns for the peak-to-peak amplitude. A few factors limit the performance of the monitoring, owing to the automatic selection of features that are not sustainable and to varying entropy rates. Since these factors are almost impossible to avoid in either of the two types of fetal heart rate monitoring procedures, because of unwanted signals caused by interference, programmable filters with variable coefficients have been used to preserve the performance. The simulations were carried out with almost 200 samples and ensured the elimination of noise and the extraction of the desired features during automatic selection. The future scope is to reduce the power of the filters using approximation techniques, trading off some accuracy.
References
1 Introduction
In wireless communication (WC), phase-locked loop (PLL) circuits are extensively used in AM radio receivers, frequency demodulators, multipliers, dividers, and frequency synthesizers (FS). The frequency synthesizer is an essential part of contemporary electronic systems, producing more than one frequency from a reference frequency generator, and is widely applied in the information and communications industry. A variety of designs have been proposed for various applications such as low power, low noise, high frequency, and low area. The PLL can also be used as a natural demodulator in WC transceiver applications. WC ICs are extensively used in handheld devices, portable electronic gadgets, and GPS systems. Most of these devices are battery operated. Due to circuit complexity, the devices consume more power and tend to have a high propagation delay [1, 2].
Power minimization techniques and frequency enhancement designs have been
proposed for PLLs [3–8]. PLL comprises a phase/frequency detector (PFD), a charge
pump with loop filter (CP), voltage-controlled oscillator (VCO), and a frequency
divider (FD). In this paper, we concentrate on reducing the power of the PLL and on enhancing its frequency with the use of a programmable frequency divider. Different circuit techniques are used to minimize PLL power consumption. Programmable frequency dividers are crucial components in an FS; their phase noise may limit the in-band phase noise performance of the PLL. The architecture of the PLL is shown in Fig. 1.
The PFD is a critical element in PLL design. It compares the phases of two sources and, through its control signals, forces one to lock onto the other. To minimize the power of the PLL, different PFD structures, shown in Fig. 2, are adopted, in which the number of MOS components is reduced for optimization. An analysis is performed targeting low power and high speed of the PFD. The output of the PFD is shown in Fig. 3.
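A purely behavioral sketch of this operation (not a transistor-level model of any of the PFD structures in Fig. 2) can be written by comparing the rising-edge times of the reference and divided VCO signals; the edge lists below are illustrative.

```python
def pfd_pulses(ref_edges, fb_edges):
    """Behavioral phase/frequency detector.

    For each pair of rising edges, the earlier edge raises its output
    (UP for the reference, DOWN for the feedback/divider output) until the
    later edge arrives, so the pulse width equals the phase error."""
    pulses = []
    for t_ref, t_fb in zip(ref_edges, fb_edges):
        if t_ref < t_fb:
            pulses.append(("UP", t_fb - t_ref))     # reference leads: speed VCO up
        elif t_fb < t_ref:
            pulses.append(("DOWN", t_ref - t_fb))   # feedback leads: slow VCO down
        else:
            pulses.append(("NONE", 0.0))            # edges aligned: locked
    return pulses

# Example: a 10 MHz reference with the divider output lagging by 2 ns.
ref = [i * 100e-9 for i in range(5)]
fb = [t + 2e-9 for t in ref]
print(pfd_pulses(ref, fb))
```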
3 Charge Pump
The CP receives the signals from the PFD and charges or discharges the loop filter (LF) accordingly. The UP and DOWN currents vary as the voltage on the LF changes, owing to the channel-length modulation effect. The major problem in PLL design that leads to spurs is the current mismatch between the UP and DOWN branches of the CP.
Fig. 2 PFD structures: (a) PFD with AND gate, (b) PFD with NAND gate, (c) PFD with XOR gate, (d) PFD with complemented-input NAND gate
The reasons for this are mismatches of the MOS components, switching-transient mismatch of the current sources, and finite output resistance; these issues are related to device size and layout [9]. To avoid the mismatch that causes PLL spurs, different CP designs are synthesized and evaluated. CP designs from the literature are shown in Fig. 4.
Fig. 4 (a) Charge pump with passive LF, (b) position of switches in the charge pump
4 Voltage-Controlled Oscillator
The phase noise of a PLL results largely from the phase noise of the VCO. Compared to ring VCOs, LC-VCOs are preferred for PLL design because of their superior phase-noise performance. However, the PLL phase noise also depends on the tuning gain of the oscillator (KVCO): as KVCO increases, any ripple at the input of the VCO produces larger variations in the output frequency and consequently increases the phase noise of the PLL [10, 11]. As a result, the operation and stability of the PLL can be negatively affected. The VCO with negative transconductance and mutual negative resistance is shown in Fig. 5.
Single-phase-clock strategies offer good features: their use allows clock sharing on the chip and reduces the number of MOS transistors. Therefore, high frequency can be attained with simple designs. The extended true-single-phase-clock (E-TSPC) CMOS circuit follows the organization rules for single-phase circuits with complementary static, dynamic, data-precharged, latch, and NMOS-like blocks [15]. The prominent feature of the E-TSPC design shown in Fig. 8 is that the circuit data rate is twice the clock rate. These designs are built from n and p data-chain connections, which reduce power or enhance the speed of the circuits.
The phase-locked loop blocks are constructed with various designs to optimize power and delay for wireless communication applications. All the designs are implemented in Tanner EDA 16.0, and the results are obtained using T-Spice. The PFD using an AND gate consumes less power, 52 µW, than other designs such as the PFD with NOR/XOR gates. The PFD with the XOR design consumes more power, but its propagation delay is 1.088 ns. Different charge pumps are implemented to compare their power efficiency: the CP using a current-steering circuit consumes the least power, 10.48 µW, with a delay of 10.74 ns. The frequency divider using GDI consumes less power, 0.772 mW, and produces a smaller delay, 2.248 ns, compared with the TG-CMOS-based FD. At each block level, a suitable W/L aspect ratio is mandatory in the PLL to obtain the correct logic levels, and variation of the W/L ratio reduces the output level. In such cases, TG-CMOS is the best choice to obtain the highest output level and better performance of the PLL.
When these low-power designs are used to construct the PLL with the programmable frequency divider, the GDI technique yields small power consumption and high speed. The PLL using the GDI technique consumes an average power of 13.5 µW with a small delay of 2.862 ns. The programmable frequency divider enhances the speed of the PLL, allowing it to operate at a frequency of more than 1 GHz. E-TSPC designs for the FD and the GDI technique for the 4-1 MUX lower the average power consumption of the programmable frequency divider, which is implemented with an optimized 4-1 MUX. The average power dissipation and propagation delay of the circuits are listed in Table 1. The results of the PLL using the programmable frequency divider with the GDI technique are shown in Fig. 9. All the designs are simulated at 2.0 V using standard 180 nm technology.
Table 1 Average power and propagation delay of different circuit designs for PLL
Design name Power Delay
PFD using AND gate 52.4 µW 5.063 ns
PFD using NOR gate 57.5 µW 5.408 ns
PFD using XOR gate 67.4 µW 1.088 ns
PFD using NAND gate 48.5 µW 7.630 ps
CP using passive LPF 0.135 mW 5.067 ns
CP using current steering 10.48 µW 10.74 ns
CP using NMOS switch only 12.1 µW 10.74 ns
PFD and CP 0.146 mW 5.067 ns
FD using TG 0.854 mW 9.132 ns
FD using GDI 0.772 mW 2.248 ns
MUX using TG used in FD 8.94 µW 911.9 ns
MUX using GDI used in FD 3.50 µW 559.8 ns
VCO 25.0 µW 151.5 ns
PLL using TG 15.1 µW 4.763 ns
PLL using GDI 13.5 µW 2.862 ns
DIV 16 used in FD 89.0 µW 161.5 ns
7 Summary
The PLL constructed with a programmable FD using the GDI technique consumes less power and operates at high frequency. Further, transceivers, frequency synthesizers, etc., can be constructed with the use of such a PLL for wireless communication.
References
1. Li B, Zhai Y, Yang B, Salter T, Peckerar M, Goldsman N (2011) Ultra low power phase detector
and phase-locked loop designs and their application as a receiver. Microelectr. J. 42:358–364
2. Gao Z, Xu Y, Sun P, Yao E, Hu Y (2010) A programmable high speed pulse swallow divide-
by-N frequency divider for PLL frequency synthesizer. In: 2010 international conference on
computer application and system modeling (ICCASM 2010), pp v315–v318
3. Pellerano S, Levantino S, Samori C, Lacaita AL (2004) A 13.5-mW 5-GHz frequency synthe-
sizer with dynamic-logic frequency divider. IEEE J Solid State Circuits 39(2):378–383
4. Ergintav A, Herzel F, Borngraeber J, JalliNg H, Kissinger D (2017) Low-power and low-noise
programmable frequency dividers in a 130 nm SiGe BiCMOS technology. In: IEEE conference
proceedings, pp 105–108
5. Razavi B, Lee KF, Yan RH (1995) Design of high-speed, low-power frequency dividers and
phase-locked loops in deep submicron CMOS. IEEE J Solid-State Circuits 30(2):101–109
6. Hafizi M (1997) High-frequency low-power IC’s in a scaled submicrometer HBT technology.
IEEE Trans Microw Theory Tech 45(12):2541–2554
7. Arakali A, Gondi S, Hanumolu PK (2009) Low-power supply regulation techniques for ring
oscillators in phase-locked loops using a split-tuned architecture. IEEE J Solid-State Circuits
44(8):2169–2181
8. Mahalingam N, Wang Y, Thangarasu BK, Ma K, Yeo KS (2017) A 30-GHz power-efficient
PLL frequency synthesizer for 60-GHz applications. IEEE Trans Microw Theory Tech
65(11):4165–4175
9. Wey T (2005) A circuit technique to improve phase-locked loop charge pump current matching.
In: IEEE conference proceedings
10. Moon Y-J, Roh Y-S, Jeong C-Y, Yoo C (2009) A 4.39–5.26 GHz LC-tank CMOS voltage-
controlled oscillator with small VCO-gain variation. IEEE Microw Wirel Compon Lett
19:524–526
11. Sánchez-Azqueta C, Aguirre J, Gimeno C, Aldea C, Celma S (2016) High-resolution wide-band
LC-VCO for reliable operation in phase-locked loops. Microelectr Reliab 63:251–255
12. Morgenshtein A, Fish A, Wagner IA (2002) Gate-diffusion input (GDI): a power-efficient
method for digital combinatorial circuits. IEEE Trans Very Large Scale Integr VLSI Syst
10(5):566–581
13. Jr Navarro SJ, Van Noije WAM (2002) Extended TSPC structures with double input/output
data throughput for gigahertz CMOS circuit design. IEEE Trans Very Large Scale Integr VLSI
Syst 10(3):301–308
14. Morgenshtein A, Fish A, Wagner IA (2002) Gate-diffusion input (GDI): a power-efficient
method for digital combinatorial circuits. IEEE Trans Very Large Scale Integr VLSI Syst
10(5):566–581
15. João Navarro S Jr, Van Noije WA (2002) Extended TSPC structures with double input/output
data throughput for gigahertz CMOS circuit design. IEEE Trans Very Large Scale Integr VLSI
Syst 10(3):301–308
Optimum LNA for WAVE Application
Abstract In this work, low-noise amplifier is designed for IEEE 802.11p WAVE
application considering 5.85–5.925 GHz frequency band with a center frequency
of 5.9 GHz. The amplifier is designed using GaAs FET ATF-36077 with feedback
stabilization technique using different components to improve the stability of the
potentially unstable device. It is observed that the power gain is higher for the feedback network with series resistance and inductance than for the other techniques, reaching 16.917 dB. The noise figure obtained with the R–L series feedback technique is 1.702 dB. A comparative analysis of these designs is discussed in this paper.
Keywords WAVE · GaAs FET · LNA · Matching · Noise figure · Power gain
1 Introduction
WLAN is one of the key technologies operating on various allotted frequencies, of which the 5.85–5.925 GHz band with a center frequency of 5.9 GHz is dedicated to WAVE (Wireless Access in Vehicular Environments). WAVE handles the interaction between high-speed vehicles, toll booths, and vehicle safety services. The advantages of this vehicular WLAN are discussed in [1]. Moreira et al. [2] designed a low-noise amplifier using a concurrent BiFET cascade topology for 802.11a WLAN, another standard used for the vehicular environment, and obtained a noise figure of 3.7 dB and a gain of 13.8 dB. Lai and Lin [3] used a CMOS-based current-reused cascade CS topology for an 802.11a LNA design and obtained an improved gain of 20.73 dB and an optimum noise figure of 3.08 dB. Pourmand et al. [4] designed a low-noise amplifier using an inductive-neutralization cascade topology and obtained a noise figure of 2.9 dB
M. Iyer (B)
Pondicherry Central University, Kalapet 605014, Puducherry, India
e-mail: makwave.26791@gmail.com
T. Shanmuganantham (B)
Department of Electronics Engineering, Pondicherry University, Kalapet 605014, Puducherry,
India
e-mail: shanmugananthamster@gmail.com
and a gain of 17.83 dB. A high-gain wideband low-noise amplifier designed with the stagger tuning technique described in [5] obtained a noise figure of 4.36 dB with a power gain of 15.43 dB. Zaini et al. [6] obtained a noise figure of 8 dB and a gain of 16 dB for FDSOI LNAs designed with different topologies.
2 Design Aspects
The low-noise amplifier is designed using a GaAs FET with a feedback topology, using different passive components to improve the stability of the device, in order to conclude which feedback component is better in terms of the noise factor, stability, and gain of the amplifier. A low noise figure (NF) and high gain are required in an LNA for the WAVE application. The Advanced Design System (ADS) simulation tool is used for designing the low-noise amplifier. There are two different types of device libraries available in the ADS software, namely, the S-parameter library and the RF transistor library. The S-parameter library works at a fixed bias, i.e., the parameters of the device are fixed for a particular bias point. In this work, the S-parameter library device is used.
In a low-noise amplifier, the maximum available gain (MAG) and the minimum noise figure (NFmin) are the important parameters of the active device to be considered, and they in turn depend on the S-parameters of the device. The S-parameters determine the stability of the device at the various bias points. Theoretically, the stability of the device is checked using the K–|Δ| test, where K is Rollett's stability factor, as described in [7].
The condition for stability is that if K > 1, |Δ| < 1, and B is positive, then the device is unconditionally stable; if K < 1, the device is potentially unstable. In that case the device tends to oscillate, and the maximum gain that can be obtained is the maximum stable gain (MSG), expressed as
$$\text{MSG} = \frac{|S_{21}|}{|S_{12}|} \tag{1}$$
If the device is unconditionally stable, i.e., K > 1, then the gain obtained will be the maximum available gain, expressed as
$$\text{MAG} = \frac{|S_{21}|}{|S_{12}|}\left(K - \sqrt{K^{2} - 1}\right) \tag{2}$$
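For illustration, the stability factor, |Δ|, MSG, and MAG can be computed from a set of two-port S-parameters as sketched below; the sample S-parameters are placeholders, not the ATF-36077 data, and the minus-sign root of Eq. (2) is used, which is the conventional choice for MAG.

```python
import numpy as np

def stability_and_gain(s11, s12, s21, s22):
    """Rollett stability factor K, |Delta|, MSG and MAG from S-parameters."""
    delta = s11 * s22 - s12 * s21
    k = (1 - abs(s11) ** 2 - abs(s22) ** 2 + abs(delta) ** 2) / (2 * abs(s12 * s21))
    msg = abs(s21) / abs(s12)                    # maximum stable gain, Eq. (1)
    if k > 1:
        mag = msg * (k - np.sqrt(k ** 2 - 1))    # maximum available gain, Eq. (2)
    else:
        mag = None                               # potentially unstable: only MSG applies
    return k, abs(delta), msg, mag

# Illustrative S-parameters (not data-sheet values).
k, d, msg, mag = stability_and_gain(0.7 * np.exp(-1j * 1.2),
                                    0.08 * np.exp(1j * 0.6),
                                    3.5 * np.exp(1j * 2.0),
                                    0.5 * np.exp(-1j * 0.9))
print(f"K = {k:.3f}, |Delta| = {d:.3f}, MSG = {20 * np.log10(msg):.2f} dB")
```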
Different techniques are available to improve the stability of a low-noise amplifier [8]. Connecting feedback with a passive element between the terminals of the active device (here the GaAs FET) is one such technique by which the stability of the device can be improved. These techniques are as follows:
• Connecting a feedback resistance between the gate and the drain terminal.
• Connecting an inductor and series resistor in feedback between gate and drain.
• Connecting a series resistor and capacitor between gate and drain.
In this work, each technique is implemented with GaAs FET device and a com-
parative study is done to determine the most stable and best noise immune amplifier.
3 LNA Designs
The device used in this work is a high range, low-noise GaAs FET ATF-36077 of
Agilent Technologies that is highly linear and provides excellent uniformity. The
circuits of GaAs FET-based LNAs are discussed further.
A resistor of 380 Ω is connected as a feedback element between the gate and drain terminals of the GaAs FET to improve the stability of the device and convert it from a potentially unstable device to an unconditionally stable one. The circuit design of the corresponding low-noise amplifier is shown in Fig. 1. The stability factor representation for this design is shown in the figures that follow.
Fig. 3 VSWR
The next LNA design implemented consists of a feedback network with a resistor of 320 Ω and an inductor of 10 nH connected in series. The matching networks are designed using distributed components in the form of microstrip transmission lines, as shown in Fig. 8. Lumped components are usually avoided in microwave circuit designs due to their parasitic effects, which disturb the operation of the amplifier.
The results of the R–L feedback LNA design are shown below. The stability factor, represented as stabfact1, and delta (|Δ|), represented as stabmeas1, are obtained as 1.041 and 0.434, respectively, as shown in Fig. 9. Figure 10 shows the VSWR, i.e., the voltage standing wave ratio, which is obtained as 1.004 at the input and 1.012 at the output side of the amplifier.
The S parameters of the amplifier are shown in Figs. 11 and 12, respectively,
which signify that S11 is −53 dB, S21 is 16.917 dB, S12 is −19.388 dB, and S22 is
−44.349 dB.
The noise figure obtained for the R–L feedback LNA is 1.702 dB and the power
gain is 16.917 dB which is shown in Figs. 13 and 14, respectively. From the above
results, it is observed that the gain of the amplifier has increased to a great extent.
Fig. 10 VSWR
The results of the R–C feedback LNA design are shown in Fig. 15. Figure 16 shows the stability factor, represented as stabfact1, and delta (|Δ|), represented as stabmeas1, which are obtained as 1.152 and 0.664, respectively. Figure 17 shows the VSWR, i.e., the voltage standing wave ratio, which is obtained as 1.011 at the input side and 1.007 at the output side of the amplifier.
Fig. 17 VSWR
The S parameters of the amplifier are shown in Figs. 18 and 19, respectively,
which signify that S11 is −45.578 dB, S21 is 11.435 dB, S12 is −16.171 dB, and S22
is −48.885 dB.
The noise figure obtained for the R–C feedback LNA is 1.833 dB and the power gain is 11.435 dB, as shown in Figs. 20 and 21, respectively. This low-noise amplifier has a lower gain and a correspondingly higher noise figure.
4 Comparative Results
5 Conclusion
References
1 Introduction
In any soil management system, soil pore space is the major determinant of sustainability in the field of work (either agrology or urbanization). It governs microbial activity and the growth of plant roots (agrology) and determines the soil strength for building construction (urbanization) [1]. Porosity and void ratio are the two quantities that characterize the whole soil morphology system. Physical parameters of the soil structure can be obtained using the fractal dimension (FD) [2, 3]. FDs are easily applied to quantify the interfaces of solid and pore spaces in any image-coded structure. Porosity provides potential information about a soil sample through the elaboration of the soil morphology system using the particle size distribution [4] and is a key factor in estimating all the chemical, physical, biological, and volumetric parameters.
Thresholding, studied over the past decades, is a segmentation strategy that reveals the number of objects present in a digital image. The two main types are global and local thresholding. In global thresholding, the objects are simply divided into foreground and background, whereas in local thresholding, the objects are divided into a number of classes present in the digital image or into user-defined ranges [5, 6]. Soil images consist of only foreground (pore space) and background (solid space) [7], so global thresholding alone is sufficient. In gray scale images, histogram-based thresholding uses knowledge from the source information based upon the peaks and valleys of the gray levels. In real-time image processing, the optimal threshold values are determined automatically. Otsu's class-variance-based thresholding either maximizes the interclass variance or minimizes the intraclass variance; these class variances should be maximal at the edges of the foreground and background objects. Johnsen and Bille proposed a threshold selection method for bimodal images based on homogeneity and the shape of the regions. Likewise, Sahoo and Kapur developed entropy-based segmentation, which separates the edges of foreground and background using the minimum and maximum entropy of the classes [5].
In bimodal thresholding, the histogram peaks sharply separate the class variances of the foreground pixels and reduce the image to a binary one; likewise, the energy of the class pixels separates the foreground edge pixels clearly. Otsu's method gives a clear picture of the pixel variances among neighboring pixels, whereas Kapur's entropy clearly depicts the pixel energy variations at the edges of the foreground. The soil agrology system can be effectively analyzed with the help of various types of thresholding, which quantify the pore space and solid space of the synthetic CT soil samples. The proposed method, which combines the contributions of both the neighboring and the edge pixels of the foreground classes, achieves enhanced feature validation of the ground truth synthetic soil samples.
Over the last decades, various authors have recommended CT soil samples, as they contain more information about objects in every sequence of frames [8]. It is easy to collate both 2D and 3D soil models, and among all the soil models, CT samples contain the largest amount of information, so the analysis of soil models beyond two dimensions can be done with ease. The specifications of the CT soil samples are as follows: 24-bit depth RGB, 256 × 256 pixels, with allowable soil pore fractions in five different ranges, 1 to 5. In range 1, the soil pores lie between 0 and 5.0% of the total image pixels. Similarly, the soil pore ranges 2, 3, 4, and 5 are shown in Table 1.
From the survey, the authors found that simulation of soil CT images with characterization of the data samples can be done using the truncated multi-fractal method [6, 9]. It works on gray scale simulated images with the following characteristics:
• Clear gray scale value histograms.
• Pores and solid materials with self-similar properties.
• Inclusion of pebble and pore space.
• Low-contrast CT simulated images, even in the soil pore space.
Structural Procedure for simulated CT image generation
The structural procedure for simulated CT image generation includes the following steps:
1. Delimitation of the pore space and pebble space in the ground truth using the Sierpinski multi-fractal method.
2. Assignment of histograms to the pore, pebble, and solid spaces with an average pore space of 0, resulting in the generation of skeleton simulated CT images.
Fig. 1 Simulated soil CT inputs in range (1–5). (1) Simulated soil CT images. (2) Ground truth soil CT images. (3) Otsu-segmented output. (4) Maximum entropy segmented output. (5) CME-CV segmented output
2.3 Thresholding
Thresholding is a common method for converting a gray scale image into a binary image with the help of a specified threshold level that separates the foreground and background pixels of the image. It is basically divided into bimodal or global thresholding [7] and multimodal or local thresholding [10, 11].
Otsu thresholding
Otsu in 1979 introduced the calculation of an optimal threshold by separating the image into two classes, namely, the foreground and background objects. The method maximizes the interclass variance or, equivalently, minimizes the intraclass variance between the classes [12, 13]. Here, the optimal threshold value k is determined through Eq. (6). The probability of the image pixels is represented as shown in Eq. (1):
$$p_i = \frac{n_i}{N}, \qquad \sum_{i=0}^{L} p_i = 1 \tag{1}$$
where “L” is defined as the maximum gray level intensity, and N is the total
number of pixels present in an image. ni is the frequencies of an individual pixel
value.
$$\mu_0 = \sum_{i=1}^{k} \frac{i\, p_i}{w_0}, \qquad c_0 \in [0, 1, 2, \ldots, k] \tag{2}$$
$$\mu_1 = \sum_{i=k+1}^{L} \frac{i\, p_i}{w_1}, \qquad c_1 \in [k+1, \ldots, L] \tag{3}$$
$$\sigma_0^2 = \sum_{i=1}^{k} \frac{(i-\mu_0)^2\, p_i}{w_0} \tag{4}$$
$$\sigma_1^2 = \sum_{i=k+1}^{L} \frac{(i-\mu_1)^2\, p_i}{w_1} \tag{5}$$
$$\sigma_w^2(k) = w_0(k)\,\sigma_0^2(k) + w_1(k)\,\sigma_1^2(k), \qquad k^{*} = \arg\min_k \sigma_w^2(k) \tag{6}$$
In Eqs. (2) and (3), c_0 and c_1 are the foreground and background pixel classes of the image, respectively, w_0 and w_1 are the corresponding class probabilities, and μ_0 and μ_1 are the average pixel values in the ranges 1 to k and k + 1 to L, respectively. σ_0² and σ_1² in Eqs. (4) and (5) are the class variances of the foreground and background classes.
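A compact sketch of this computation is given below; it minimizes the within-class variance of Eq. (6), which is equivalent to maximizing the between-class variance, on a synthetic bimodal image that stands in for a soil CT slice.

```python
import numpy as np

def otsu_threshold(image, levels=256):
    """Otsu's optimal threshold: minimize w0*sigma0^2 + w1*sigma1^2 (Eq. (6))."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()                      # Eq. (1): p_i = n_i / N
    gray = np.arange(levels)
    best_k, best_sigma_w = 0, np.inf
    for k in range(1, levels - 1):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (gray[:k] * p[:k]).sum() / w0    # Eq. (2)
        mu1 = (gray[k:] * p[k:]).sum() / w1    # Eq. (3)
        var0 = (((gray[:k] - mu0) ** 2) * p[:k]).sum() / w0   # Eq. (4)
        var1 = (((gray[k:] - mu1) ** 2) * p[k:]).sum() / w1   # Eq. (5)
        sigma_w = w0 * var0 + w1 * var1        # Eq. (6)
        if sigma_w < best_sigma_w:
            best_k, best_sigma_w = k, sigma_w
    return best_k

# Synthetic bimodal "soil" image: dark pores on a brighter solid matrix.
img = np.concatenate([np.random.normal(60, 10, 2000),
                      np.random.normal(180, 15, 8000)]).clip(0, 255)
print("Otsu threshold:", otsu_threshold(img))
```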
Maximum entropy method
The entropy of an image characterizes the foreground and background objects through the probability distribution of the intensities at the various levels; it is the quantity used to describe the information content of the image. The optimal threshold value k separates the foreground from the background [14].
$$H_a = -\sum_{i=1}^{k} p_i \log_e p_i \tag{7}$$
$$H_b = -\sum_{i=k+1}^{L} p_i \log_e p_i \tag{8}$$
$$k^{*} = \arg\max_k \{H_a + H_b\} \tag{9}$$
In Eqs. (7) and (8), L is the maximum gray level intensity, and H_a and H_b are the entropies of the foreground and background gray levels. The optimal threshold k maximizes the total entropy, as shown in Eq. (9).
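The sketch below illustrates the idea of Eqs. (7)–(9); note that, following Kapur's formulation, the class probabilities are normalized by the class totals (an assumption beyond the simplified equations above) so that the criterion actually varies with k.

```python
import numpy as np

def max_entropy_threshold(image, levels=256):
    """Kapur-style maximum-entropy threshold: pick k maximizing H_a + H_b."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    best_k, best_h = 1, -np.inf
    for k in range(1, levels - 1):
        pa, pb = p[:k], p[k:]
        wa, wb = pa.sum(), pb.sum()
        if wa == 0 or wb == 0:
            continue
        qa, qb = pa[pa > 0] / wa, pb[pb > 0] / wb   # class-conditional probabilities
        h = -(qa * np.log(qa)).sum() - (qb * np.log(qb)).sum()   # H_a + H_b
        if h > best_h:
            best_k, best_h = k, h
    return best_k

img = np.concatenate([np.random.normal(60, 10, 2000),
                      np.random.normal(180, 15, 8000)]).clip(0, 255)
print("maximum-entropy threshold:", max_entropy_threshold(img))
```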
Combined Maximum Entropy-Class Variance Thresholding (CME-CV)
In this methodology, the thresholding concentrates on both the neighborhood and the edge pixels of the soil pores and pebble spaces in order to obtain an accurate segmentation. As a novel step, the authors combine both the edges and the covering area of all pore, solid, and pebble spaces. The optimal threshold value k of Eq. (9) is modified in such a way that it maximizes both the interclass variance and the entropy; the modified threshold value k is given in Eq. (10). The amounts of pore space and the void ratios of the simulated samples are then calculated and validated with these optimal thresholding methods.
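Since Eq. (10) itself is not reproduced here, the following is only a plausible sketch of the combined idea described in the text: a between-class-variance term and a class-entropy term are normalized and summed, and the k maximizing the sum is returned. The equal weighting and the normalization are assumptions, not the paper's exact criterion.

```python
import numpy as np

def cme_cv_threshold(image, levels=256):
    """Hypothetical combined criterion (not the paper's exact Eq. (10)):
    maximize the sum of the normalized between-class variance (Otsu term)
    and the normalized total class entropy (maximum-entropy term)."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    gray = np.arange(levels)
    var_term, ent_term = np.full(levels, np.nan), np.full(levels, np.nan)
    for k in range(1, levels - 1):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (gray[:k] * p[:k]).sum() / w0
        mu1 = (gray[k:] * p[k:]).sum() / w1
        var_term[k] = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
        qa, qb = p[:k][p[:k] > 0] / w0, p[k:][p[k:] > 0] / w1
        ent_term[k] = -(qa * np.log(qa)).sum() - (qb * np.log(qb)).sum()
    # Normalize both terms to [0, 1] before combining (assumed weighting).
    v = (var_term - np.nanmin(var_term)) / (np.nanmax(var_term) - np.nanmin(var_term))
    e = (ent_term - np.nanmin(ent_term)) / (np.nanmax(ent_term) - np.nanmin(ent_term))
    return int(np.nanargmax(v + e))

img = np.concatenate([np.random.normal(60, 10, 2000),
                      np.random.normal(180, 15, 8000)]).clip(0, 255)
print("combined-criterion threshold:", cme_cv_threshold(img))
```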
Table 2 portrays the statistical analysis of the simulated image samples using the CME-CV method, compared with the skeleton ground truth and the simulated soil CT images. The Otsu, maximum entropy, and CME-CV methods were examined both visually and by using validation parameters such as porosity, void ratio, and misclassification error. In total, 50 simulated soil CT images were processed with this system.
Misclassification Error (ME)
Using ground truth of corresponding simulated CT image, the various thresholding
methodologies are validated using Misclassification Error (ME) metric. Misclassifi-
cation error is defined as the measure of mismatching pixels between ground truth
and segmented image [9].
$$\text{ME} = 1 - \frac{P + S}{T} \tag{11}$$
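Interpreting P and S as the correctly classified pore and solid pixels and T as the total pixel count (an assumed but common reading of Eq. (11)), the metric can be computed as sketched below.

```python
import numpy as np

def misclassification_error(ground_truth, segmented):
    """ME = 1 - (P + S) / T, with boolean masks whose True entries mark pores."""
    p = np.logical_and(ground_truth, segmented).sum()       # pores found in both
    s = np.logical_and(~ground_truth, ~segmented).sum()     # solids found in both
    return 1.0 - (p + s) / ground_truth.size

gt = np.zeros((256, 256), dtype=bool); gt[40:80, 40:80] = True
seg = np.zeros_like(gt); seg[42:80, 40:78] = True
print(f"ME = {misclassification_error(gt, seg):.4f}")
```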
Table 2 Simulated soil CT samples

Input               Optimum threshold value      GT-porosity (%)   Porosity (%)               Void ratio (%)
                    Otsu    ME     CME-CV                          Otsu    ME     CME-CV      Otsu    ME     CME-CV
Simulated input-1    73     134     90            3.35             3.65    4.15    3.70       3.79    4.32    3.84
Simulated input-2    71     134     83            7.32             7.92    8.35    7.60       8.60    9.11    8.22
Simulated input-3    70     136     77           13.37            14.16   15.74   13.67      16.50   18.67   15.83
Simulated input-4    72     138     77           17.7             18.64   19.45   18.00      22.92   24.15   21.95
Simulated input-5    70     139     76           23.38            24.78   25.63   23.90      32.95   34.47   31.40
[Bar charts: relative porosity error (%) and misclassification error (%) of the Otsu, maximum entropy, and CME-CV methods for simulated inputs 1–5.]
4 Conclusion
The Otsu and maximum entropy methods have already proven to be efficient for thresholding-based segmentation. However, the main hindrance of the Otsu methodology is that it provides no clearly bounded edges, while the drawback of the entropy method is that it does not account for sufficient neighboring pixels. The ME and relative porosity error rates are highest for both the Otsu and maximum entropy methods for all the simulated soil CT images, which reduces the accuracy. The CME-CV method pursued in this work therefore achieves maximum accuracy and optimal thresholding in both visual perception and the soil validation parameters. In future, a system for soil–water sustainability and soil–air sustainability can be built using the obtained void ratio and porosity, which are major deciding factors in the soil agrology system.
References
1. Miller BA, Schaetzl RJ (2015) History of soil geography in the context of scale. Geoderma
2. Martín-Sotoca JJ, Saa-Requejo A, Grau JB, Tarquis AM (2016) Local 3D segmentation of soil
pore space based on fractal properties using singularity maps. Geoderma
3. Sauzet O, Cammas C, Gilliot J-M, Bajard M, Montagne D (2017) Development of a novel
image analysis procedure to quantify biological porosity and illuvial clay in large soil thin
sections. Geoderma 292:135–148
4. Rodríguez-Lado L, Lado M (2016) Relation between soil forming factors and scaling properties
of particle size distributions derived from multi-fractal analysis in topsoil’s from Galicia (NW
Spain). Geoderma
5. Sahoo PK, Soltani S, Wong KC (1988) A survey of thresholding techniques. Comput Vis Graph
Image Process 41:233–260
6. Martín-Sotoca JJ, Saa-Requejo A, Grau JB, Paz-González A, Tarquis AM (2017) Combining
global and local scaling methods to detect soil pore space. J Geochem Explor
7. Abera KA, Manahiloh KN, Nejad MM (2017) The effectiveness of global thresholding tech-
niques in segmenting two-phase porous media. Constr Build Mater 142:256–267
8. Dathe A, Eins S, Niemeyer J, Gerold G (2001) The surface fractal dimension of the soil–pore
interface as measured by image analysis. Geoderma 103:203–229
9. Hyväluoma J, Kulju S, Hannula M, Wikberg H, Källi A, Rasa K (2017) Quantitative charac-
terization of pore structure of several biochars with 3D imaging. https://doi.org/10.1007/s11356-017-8823-x
10. Arunpandian M, Arunprasath T, Vishnuvarthanan G, Pallikonda Rajasekaran M (2017) Thresh-
olding based soil feature extraction from digital image samples—a vision towards smarter
agrology. In: Information and communication technology for intelligent systems. Smart inno-
vation, systems and technologies, vol 1, 83. https://doi.org/10.1007/978-3-319-63673-3_55
11. Chang J-S, Liaob H-YM, Herb M-K, Hsieh J-W, Cherna M-Y (1997) New automatic multi-level
thresholding technique for segmentation of thermal images. Image Vis Comput 15:23–34
12. Otsu N (1979) A threshold selection method from gray-level Histogram. IEEE Trans Syst Man
Cybern 9(1). 0018-9472/79/0100-0062$00.75
13. Zhu N, Wang G, Yang G, Dai W (2009) A fast 2D Otsu thresholding algorithm based on
improved histogram. IEEE, 978-1-4244-4199-0/09
14. Kapur JN, Sahoo PK, Wong AKC (1985) A new method for gray level picture thresholding
using the entropy of the histogram. Comput Vis Graph Image Process 29:273–285
Automatic Segmentation of Gallbladder
Using Intuitionistic Fuzzy Based Active
Contour Model
Abstract Automatic and accurate image segmentation is an essential requirement in image processing in various fields, and computer-based algorithms are needed to attain precise segmentation and classification results. For segmenting the gallbladder, there are only very few automatic segmentation approaches. With the prime aim of exploiting the fuzzy nature of the energy equations, this work develops an intuitionistic fuzzy based active contour model for segmenting B-mode ultrasound medical scan images. In preprocessing, a histogram modification process and DooG filtering are employed to improve the quality of the input image. Subsequently, the task of boundary demarcation is performed by utilizing the intuitionistic fuzzy based active contour model. The proffered method is validated by comparing the inferred results with other conventional boundary demarcation techniques.
1 Introduction
2 Literature Review
This section puts forward a brief discussion of previous works on gallbladder segmentation with their advantages and disadvantages [2, 3]. Active contour models have been extensively implemented to carve out the gallbladder by Marcin Ciecholewski [4]. Two effective descriptors of the ACM, the membrane equation and the motion equation, are used to empirically trace the outline of the gallbladder shape and the structure of polyps; the superfluous components in the image are subsequently removed from the input medical image. In the statistical analysis of 600 input US diagnostic scan images, the absolute values of the Dice Coefficient (DSC) were used for comparison between traditional ACMs, and their average values range to approximately 81.8%. The notion of histogram transformation was applied to improve the contrast of the input image. As a comparative investigation in 2011 [5], an adaptively boosted SVM was incorporated for detecting and classifying lesions as either lithiasis or polyps. An accuracy of 91% was procured while classifying the tissue as lithiasis, only 80% in segregating the structure of polyps, and 78.9% in cases that have both. As a milestone of gallbladder shape analysis [6], Ogiela et al. completed the task of dissecting the gallbladder shape in an image by applying different processes; they adopted binarization and histogram analysis. Xie et al. [7] propounded a level set model for segmenting the gallbladder. The observations offered by the author in [8] are not assessed with numerous input images, even though that approach gives greater flexibility for gallbladders without lesions. Thus, the present study more significantly encompasses the application of fuzzy concepts in gallbladder segmentation.
Fig. 1 Block diagram of the proposed intuitionistic fuzzy active contour model
The dataset for actualizing the recommended scheme encompasses 180 diagnostic frames (images) acquired from patients with cholelithiasis; it was obtained from a diagnostic center in India.
Preprocessing is a vital task in ultrasound image processing [9, 10]. Considering B as the resultant image of input image A after histogram equalization, p and q are regarded as the gray-level values of image A and image B, respectively. After the histogram transformation, the histograms of p and q are H(p) and H(q); in signal processing terms, these represent the (probability-based) density functions of the gray levels. The transformation function in Eq. (1) generates a gray-level value q_i equivalent to p_i:
q = F( p) (1)
$$G(a, b) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{a^{2}+b^{2}}{2\sigma^{2}}} \tag{2}$$
$$G'_{\sigma}(a, b) = \frac{\partial G_{\sigma}(a, b)}{\partial a} = \frac{-a}{\sigma^{2}}\, G_{\sigma}(a, b) \tag{3}$$
The DooG filter along the horizontal axis is represented in Eq. (4).
in which the problem-specific function v(s) delineates the curve topography, E_i symbolizes the internal potential energy, E_e is the energy that imparts the extrinsic conditions to the produced contour outline, and E_p corresponds to the energy factor acquired from intrinsic features of the image. The representation of the energy equation in discrete form is highly useful in the machine-based modeling of deformable models, as in Eq. (8):
$$E = \sum_{s=0}^{m-1} \left[ E_i(v(s)) + E_e(v(s)) + E_p(v(s)) \right] \tag{8}$$
Let A be an intuitionistic fuzzy set on a universe E, represented as
$$A = \{\, (x, \mu_A(x), \nu_A(x)) \mid x \in E \,\} \tag{9}$$
The distance between two intuitionistic fuzzy sets A and B is
$$d_{IFS}(A, B) = \sum_{i=0}^{n} \big( |\mu_A(x_i) - \mu_B(x_i)| + |\nu_A(x_i) - \nu_B(x_i)| + |\pi_A(x_i) - \pi_B(x_i)| \big) \tag{10}$$
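The distance of Eq. (10) can be evaluated as sketched below, assuming the standard hesitation degree π = 1 − μ − ν; the membership and non-membership arrays are toy values.

```python
import numpy as np

def ifs_distance(mu_a, nu_a, mu_b, nu_b):
    """Distance between intuitionistic fuzzy sets A and B (Eq. (10)):
    sum of |mu_A - mu_B| + |nu_A - nu_B| + |pi_A - pi_B| over all elements,
    with the hesitation degree pi = 1 - mu - nu."""
    pi_a = 1.0 - mu_a - nu_a
    pi_b = 1.0 - mu_b - nu_b
    return np.sum(np.abs(mu_a - mu_b) + np.abs(nu_a - nu_b) + np.abs(pi_a - pi_b))

# Toy example with three elements.
mu_a, nu_a = np.array([0.6, 0.3, 0.8]), np.array([0.2, 0.5, 0.1])
mu_b, nu_b = np.array([0.5, 0.4, 0.7]), np.array([0.3, 0.4, 0.2])
print("d_IFS(A, B) =", ifs_distance(mu_a, nu_a, mu_b, nu_b))
```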
The existence of calcification complicates the process of gallbladder shape extraction, which is the essential detail that a radiologist is interested in. Parts 1.a–4.a of Fig. 2 show four input medical images contained in the dataset, and parts 1.b–4.b of Fig. 2 delineate the ability of the proposed intuitionistic fuzzy based active contour model to draw out the gallbladder from the clinical image. It is noticeable that the substantial effort in preprocessing contributed to effacing the noise and also the interference caused by acoustic shadows. The resulting outcome, which eliminates superfluous details, is retrieved after the postprocessing operations; it is an inherent result of the preprocessing, which removes speckle noise from the input images. A successful outcome of preprocessing the input diagnostic image provides additional help to the demarcation process. The eventual outline of the gall region, excluding the insignificant details accumulated in the given US diagnostic scan image, is obtained after postprocessing and is exemplified in parts 1.c–4.c of Fig. 2. A statistical analysis of the well-known quality indicators is shown in Table 1. The effectiveness of the intuitionistic fuzzy based active contour model is directly indicated by the highest values of the qualitative measurements.
The outlines of the gallbladder and gallstones are extensively highlighted in the ultrasound scan image with the help of the fuzzy-incorporated algorithms. In this paper, an intuitionistic fuzzy sets based active contour method is proposed; it precisely articulates the silhouette of the gallbladder and delineates the profile of the gallstone in the diagnostic US image. The values of qualitative parameters like sensitivity, specificity, and accuracy are more balanced than those procured in previous works. From the comparative experiments, it has been perceived that the gall shape articulated by the intended practice is analogous to the experts' demarcation.
Acknowledgements The authors thank the Department of ECE, Kalasalingam University, for
permitting to use the computational facilities available in Centre for Research in Signal Processing
and VLSI Design which was set up with the support of the Department of Science and Technology
(DST), New Delhi under FIST Program in 2013 (Reference No: SR/FST/ETI-336/2013 dated
November 2013).
References
1. LaRocca CJ, Hoskuldsson T, Beilman GJ (2015) The use of imaging in gallbladder disease.
In: Eachempati S, Reed R II (eds) Acute cholecystitis. Springer, Cham, pp 41–53
2. Muneeswaran V, Pallikonda Rajasekaran M (2018) Gallbladder shape estimation using tree-
seed optimization tuned radial basis function network for assessment of acute cholecystitis. In:
Bhateja V et al (eds) Intelligent engineering informatics, advances in intelligent systems and
computing, vol 695. Springer, Cham
3. Muneeswaran V, Pallikonda Rajasekaran M (2018) Automatic segmentation of gallbladder
using bio-inspired algorithm based on a spider web construction model. J Supercomput. https://doi.org/10.1007/s11227-017-2230-4
4. Ciecholewski M, Chocholowicz J (2013) Gallbladder shape extraction from ultrasound images
using active contour models. Comput Biol Med 43:2238–2255
5. Ciecholewski M (2011) AdaBoost-based approach for detecting lithiasis and polyps in USG
images of the Gallbladder. In: Badioze Zaman H et al (eds) Visual informatics: sustaining
research and innovations, IVIC 2011. Lecture notes in computer science, vol 7066. Springer,
Berlin, Heidelberg
6. Bodzioch S, Ogiela MR (2009) New approach to gallbladder ultrasonic images analysis and
lesions recognition. Comput Med Imaging Graph 33:154–170
7. Xie W, Ma Y, Shi B, Wang Z (2013) Gallstone segmentation and extraction from ultrasound
images using level set model. In: ISSNIP biosignals and biorobotics conference: biosignals
and robotics for better and safer living (BRC). Rio de Janerio, pp 1–6
8. Ciecholewski M (2010) Gallbladder boundary segmentation from ultrasound images using
active contour model. In: Fyfe C, Tino P, Charles D, Garcia-Osorio C, Yin H (eds) Intelligent
data engineering and automated learning, IDEAL 2010. Lecture notes in computer science, vol
6283. Springer, Berlin, Heidelberg, pp 63–69
9. Muneeswaran V, Pallikonda Rajasekaran M (2017) Analysis of particle swarm optimization
based 2D FIR filter for reduction of additive and multiplicative noise in images. In: Arumugam
S, Bagga J, Beineke L, Panda B (eds) Theoretical computer science and discrete mathematics,
ICTCSDM 2016. Lecture notes in computer science, vol 10398. Springer, Cham
Abstract This paper illustrates the design and implementation of the speed control
of DC motor in real time using Matlab and Arduino Due board with conventional
and nonconventional methods. In the conventional technique, the PID controller is
implemented using two different modeling approaches, viz., first principles modeling
and data-driven modeling. In nonconventional techniques, an inverse neural network
controller is trained with three layers using backpropagation algorithm, concurrent
relay-based PID controller consists of a parallel PID controller with dead band relay,
and a neuro-fuzzy controller is designed and implemented using the adaptive neuro-
fuzzy inference system (ANFIS). The simulation and real-time responses of the closed-loop speed control of the DC motor are presented. A comparative study of the time response characteristics has been carried out, and it is observed that the ANFIS controller provides better control than the other methods used in this paper.
1 Introduction
DC motors are used as an actuator and act as a final control element in many process
control systems. In control systems, the controller manipulates the actuator based
on the error signal. DC motors have been widely used in most of the applications in
avionics, process plants, automobiles, rolling mills, consumer products, etc. In such
applications, speed and the position of the motor need to be controlled accurately, to
improve the efficiency and optimum performance of the system. Various conventional
and nonconventional techniques are currently in use to control the speed of a DC
motor in literature as PID controller, fuzzy logic, neural networks, hybrid models, etc.
Traditional methods like P, PI, and PID controllers are commonly used in many applications as they are simple in design. The accuracy of the system response highly depends on the tuning parameters of the PID controller and the governing equations of the mathematical model [1]. Conventional control theory is suitable for modeling linear systems, and it is difficult to model complex nonlinear systems where the mathematical model is uncertain. In the past decade, several nonconventional techniques have been developed and proven to overcome the difficulties in modeling complex nonlinear dynamical systems, such as sliding mode control, model reference adaptive control, fuzzy logic, neural networks, and hybrid models like fuzzy-PID, neuro-fuzzy, and genetic algorithms. In this paper, the design and implementation of the speed control of a DC motor with Matlab software and Arduino Due hardware are presented. The conventional PID method and nonconventional methods like the inverse neural network, neuro-fuzzy, and concurrent relay-based PID were implemented. A comparative study is carried out both in simulation and in real time.
2 Experimental Setup
Figure 1 shows the block diagram of the experimental setup considered in this paper. A Transcoil 12 V DC motor with an inbuilt tachometer, which runs at a maximum speed of 4500 rpm, is used. The sensitivity of the DC motor is 1.9 V/1000 rpm. At maximum speed, the output voltage from the tachometer is 8.5 V; however, the Arduino Due microcontroller measures analog input voltages only in the 0–3.3 V range, so a level shifter is designed to reduce the 0–8.5 V signal to the 0–3.3 V range. The noise from the output of the tachometer is eliminated by an RC low pass filter. The voltage from the tachometer is measured with the Arduino Due board at a sampling rate of 0.01 s and converted into speed [2]. The speed of the DC motor is controlled using an L293D H-bridge driver; the PWM signal required for the desired speed is calculated and sent to the H-bridge driver [3]. The overall circuit diagram and experimental setup are shown in Figs. 2 and 3.
etc. In this paper, the PID controller is designed using the first principles model and the data-driven model.
A first principles model requires knowledge of the fundamental governing laws. The mathematical model of the DC motor is as shown in Fig. 5, and the transfer function of the speed versus the input voltage is given in Eq. (1). The model of the DC motor can be designed with either the Matlab Simulink toolbox or the Simscape toolbox. Often, not all the parameters needed to design the model are available from the manufacturer; parameters not specified by the manufacturer for the above motor are Bm (damping coefficient) and Kb (back-EMF constant).
$$\frac{\omega(s)}{V_a(s)} = \frac{K_m}{L_a J_m s^{2} + (R_a J_m + L_a B_m)\, s + (R_a B_m + K_b K_T)} \tag{1}$$
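For illustration, the step response implied by Eq. (1) can be simulated as below; since Bm and Kb are not supplied by the manufacturer, all parameter values here are placeholders, not the actual motor data.

```python
import numpy as np
from scipy import signal

# Illustrative parameter values only (placeholders for a small DC motor).
Ra, La = 2.0, 0.5e-3          # armature resistance (ohm) and inductance (H)
Jm, Bm = 1e-5, 1e-6           # rotor inertia (kg m^2) and viscous damping (N m s)
KT = Kb = Km = 0.019          # torque and back-EMF constants

num = [Km]
den = [La * Jm, Ra * Jm + La * Bm, Ra * Bm + Kb * KT]   # denominator of Eq. (1)
motor = signal.TransferFunction(num, den)

t = np.linspace(0, 0.5, 1000)
t, w = signal.step(motor, T=t)      # open-loop speed response to a 1 V step
print(f"speed after 0.5 s for a 1 V step: {w[-1]:.1f} rad/s")
```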
Data-driven modeling is used when the transfer function of the system cannot be derived from mathematical modeling. System identification tools are used to estimate either a linear or a nonlinear transfer function from the acquired input–output data of the system. Figure 7 shows the linear transfer function estimated with the system identification tools.
$$TF = \frac{0.0313\, z^{-3}}{1 - 1.974\, z^{-1} + 0.9767\, z^{-2}} \tag{2}$$
A widely used feedback controller in process control is the PID controller, due to its simplicity of design [4]. The output of the controller depends on the present, past, and future error. The governing equation of the PID controller is
$$u(t) = K_p e(t) + K_i \int e(t)\, dt + K_d \frac{de(t)}{dt} \tag{3}$$
where K_p, K_i, and K_d are the proportional, integral, and derivative gains of the controller. The main challenge of the PID controller is the tuning of these parameters, which can be performed either online or offline. Several offline tuning techniques, such as the Ziegler–Nichols and open-loop reaction curve methods, exist in the literature.
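A discrete, positional form of Eq. (3), as it might run at the 0.01 s sampling rate of the setup, is sketched below; the gains and the one-line plant stand-in are placeholders, not the Matlab-tuned values.

```python
class DiscretePID:
    """Positional discrete PID controller (illustrative sketch of Eq. (3))."""
    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.ts                  # integral term
        derivative = (error - self.prev_error) / self.ts  # derivative term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example loop at the 0.01 s sampling rate used in the experiment.
pid = DiscretePID(kp=0.05, ki=0.4, kd=0.001, ts=0.01)     # placeholder gains
speed = 0.0
for _ in range(5):
    pwm = pid.update(setpoint=2000.0, measurement=speed)  # rpm error -> control
    speed += 5.0 * pwm * 0.01                             # crude stand-in for the motor
```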
In this paper, tuning of the PID controller has been done offline with the Matlab automatic tuning toolkit, using the model obtained through either first principles or data-driven modeling. Figure 8 shows the Simulink model used to tune the PID constants, and Fig. 9 shows the parameters tuned by the toolkit. Figure 10 shows the Simulink model of the closed-loop controller in real time, which measures the speed from the Arduino Due board and generates the PWM signal demanded by the PID controller.
3 Nonconventional Modeling
Conventional techniques are still widely used in process industries because they are simple in design and low in cost, but they are not appropriate for nonlinear systems [5]. Obtaining the mathematical model for nonlinear complex systems is difficult; consequently, intelligent control techniques such as fuzzy logic, neural networks, and hybrid models are used to achieve optimum performance in nonlinear systems.
Figure 11 shows the block diagram of an inverse neural network controller. Neural
networks are used in control of nonlinear systems due to its ability of learning. The
network is trained to act as the inverse of the system and used as a controller [6]. In
this paper, the network is trained with four inputs y(k + 1), y(k), y(k − 1), y(k − 2)
and u(k) where u(k) is the input voltage of motor and y(k) is the speed of the motor
as output. After the training, y(k + 1) is replaced with the set point. Equation 3 shows
the output of the inverse model:
$$u(k) = f\big(y(k+1),\, y(k),\, \ldots,\, y(k-n-1),\, u(k-n),\, \ldots,\, u(k-m+1)\big) \tag{3}$$
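A rough software analogue of this training (not the authors' Matlab three-layer backpropagation implementation) is sketched below using a single-hidden-layer scikit-learn regressor; the stand-in plant and all numbers are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative input-output data; in the experiment these samples come from
# the Arduino Due logging u(k) (voltage) and y(k) (speed) every 0.01 s.
rng = np.random.default_rng(1)
u = rng.uniform(0, 12, 2000)
y = np.zeros(2001)
for k in range(2000):
    y[k + 1] = 0.98 * y[k] + 6.0 * u[k]   # crude stand-in plant, not the real motor

# Regressors y(k+1), y(k), y(k-1), y(k-2) map to the target input u(k).
K = np.arange(3, 2000)
X = np.column_stack([y[K + 1], y[K], y[K - 1], y[K - 2]])
inverse_model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
inverse_model.fit(X, u[K])

# At run time y(k+1) is replaced by the set point to obtain the control input.
setpoint = 2000.0
u_cmd = inverse_model.predict([[setpoint, y[-1], y[-2], y[-3]]])[0]
print(f"suggested control input: {u_cmd:.2f} V")
```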
Guomin Li proposed a hybrid model in 2007 by adding a dead-band relay and a limited integrator to the PID controller. The dead-band relay outputs a signal when the error is large and is switched off when the error is small [7]. The purpose of the integrator is to reduce the settling time, and the lead compensator adjusts the gain margin and phase margin to keep the closed-loop system stable. This hybrid model improves the system response compared with the PID alone. Figure 13 shows the block diagram of the CRPID controller.
The lead compensator gain G_l = (τs + 1)/(ατs + 1), the dead-band relay, and the limited integrator gain k_a = d/(6k_i) are calculated from the above equations, where d is the maximum controller output, h is the maximum relay output, and T is the time constant; these blocks are placed in parallel with the PID controller.
Hybrid models such as fuzzy-neural and neuro-fuzzy controllers offer the advantages of both constituent controllers. In 1993, Jang introduced an intelligent approach known as ANFIS, which combines the merits of fuzzy logic, with optimized rules and membership functions, and neural network models to solve nonlinear and complex problems with good precision [8]. From the sampled input–output data, the Matlab ANFIS tool generates a fuzzy inference file and its membership functions, which are tuned with a neural network. Figure 14 shows the ANFIS tool used to design the neuro-fuzzy controller, and Fig. 15 shows the Simulink model for implementation of the ANFIS model in real time.
The experiment has been successfully conducted at different speeds; the practical values were compared with the theoretical values, matched satisfactorily, and showed that the speed of the motor follows the set point. Figure 16 shows the time response characteristics of all the models observed at a speed of 2000 rpm. The rise time and settling time of all the models are listed in Table 1. It is observed that the performance of the ANFIS controller is the best among the methods used in this paper.
References
1. Udaya Kumar B, Ramesh Patnaik M (2017) Comparative analysis of real time discrete PID
controller design using first principles and data driven model. IJAREIE 6:1111–1121
2. Zaki AM, El-Bardini M, Soliman FAS, Sharaf MM (2015) Embedded two level direct adaptive
fuzzy controller for DC motor speed control. Ain Shams Eng J
3. Petru L, Mazen G (2015) PWM control of a DC motor used to drive a conveyor belt. Procedia
Eng 100:299–304
4. Puangdownreong D, Nawikavatan A, Thammarat C (2016) Optimal design of I-PD controller
for DC motor speed control system by Cuckoo search. Int Electr Eng Congr 86:83–86
5. Aziz Khater A, El-Bardini M, El-Rabaie NM (2015) Embedded adaptive fuzzy controller based
on reinforcement learning for DC motor with flexible shaft. Arab J Sci Eng 40:2389–2406
6. Buzi E, Marango P (2013) A comparison of conventional and nonconventional methods of DC
motor speed control. In: 15th IFAC workshop on international stability, technology and culture,
vol 46, pp 50–53
7. Li G, Tsang KM (2007) Concurrent relay-PID control for motor position servo systems. IJCAS
5:234–242
8. Jayetileke HR, de Mel WR, Ratnayake HUW (2014) Real-time fuzzy logic speed tracking
controller for a DC motor using Arduino Due. In: 7th international conference on information
and automation for sustainability. IEEE
Design of Electronic Security System
in Restricted Areas on MSP430 Processor
1 Introduction
Unauthorized humans and fire are the main safety concerns in shopping malls, forest areas, bus stations, etc. Therefore, to provide safety, we introduce an Electronic Security System (ESS). In earlier days, an IR-based Normal EPS (NEPS) was developed, in which the IR stage is activated whenever a person walks across the device. Although this device is efficient in detecting intruders, its response time for detecting fire is high, and the usage of NEPS is limited since the IR sensors are restricted to a 30 m line of sight.
To overcome these issues, we introduce a PIR-based ESS with high accuracy and low latency in detecting flame and intruders. The PIR itself acts as a single-pixel camera. The coverage area of a PIR sensor varies with the type of PIR sensor; here we utilize the IS9B PIR sensor with a radius of 5 m, which is sufficient to cover the majority of rooms with a lofty ceiling. As a result, the PIR offers a cost-effective solution for recognizing unauthorized persons and fire, especially in large rooms. In this work, an HMM-based flame flicker method, similar to those used for recognizing flame in videos, is employed. To realize the HMM, wavelets are used because wavelet signals are not influenced by the slow variations occurring in the moving scene.
The rest of the paper is organized as follows: Sect. 2 describes the technical back-
ground of the proposed method; Section 3 describes the hardware implementation of
ESS; Section 4 describes the data processing and HMM models; Section 5 describes
experimental results; and Section 6 describes the conclusion.
2 Technical Background
The PIR senses IR-radiating moving objects within its range. The PIR produces
"logic 1" when it detects a flame or a moving object and "logic 0" when there is
no flame or moving object. In this way, only moving persons are recognized. Instead of
using the PIR digital output directly, the analog information is extracted from the PIR and then
sampled. Consequently, signal processing strategies can be developed and their outputs
fed to the HMM. After recognizing the moving object or flame, the processor drives
the buzzer and displays a message on the LCD. In this manner, it is possible to build
up intruder and flame recognition strategies. Figure 1 represents the flow diagram of
the electronic security system.
3 Hardware Implementation
The proposed method requires two kinds of power supplies. They are
• Power 1 (3 V) and
• Power 2 (5 V).
Fig. 1 Flow diagram of the electronic security system (PIR output evaluated by the HMM; on detection, the LCD and alarm are driven)
Fig. 2 Block diagram of the proposed system (PIR sensor, MSP430FG4618, LCD display, alarm)
MSP430FG4618
It captures the analog signal and generates the binary output via the read-out circuit.
Whenever motion of a hot body occurs within the coverage area, the
strength of the signal increases; this phenomenon is caused by the resulting variations in
ambient temperature. In this way, the differential PIR sensor distinguishes between
intruder and flame. Figure 3 represents an illustration for generation of an analog
signal from PIR and is adapted from [1].
Figure 4 represents the experimental board setup for capturing the analog signal
output from PIR sensor.
Fig. 3 Circuit diagram for generation of an analog signal from the PIR
Fig. 4 The experimental board setup for capturing the analog signal output from PIR sensor
Fig. 6 Digital output of PIR sensor when there is no activity within the range
Fig. 7 Analog output of PIR sensor when there is no activity within the range
The output of the PIR goes to the controller. The controller processes the sensor output,
and a signal is given to the LCD and alarm. If an intruder or flame
is identified, the corresponding message is sent to the security room through the LCD,
alarm, and GSM. Table 1 lists the AT commands that are sent to the mobile through
GSM.
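As a rough illustration of how such a GSM alert could be issued, the sketch below sends an SMS using the standard GSM text-mode AT commands (AT+CMGF, AT+CMGS) over a serial link; the serial port name, baud rate, and phone number are placeholders, and the exact command sequence of the modem used in this work is not specified in the paper.

```python
import time
import serial  # pyserial

def send_alert(port="/dev/ttyUSB0", number="+910000000000",
               text="ALERT: intruder/flame detected"):
    """Send an SMS through a GSM modem using text-mode AT commands (a sketch)."""
    with serial.Serial(port, 9600, timeout=2) as gsm:
        gsm.write(b"AT\r")                 # check that the modem responds
        time.sleep(0.5)
        gsm.write(b"AT+CMGF=1\r")          # select SMS text mode
        time.sleep(0.5)
        gsm.write(f'AT+CMGS="{number}"\r'.encode())
        time.sleep(0.5)
        gsm.write(text.encode() + b"\x1a") # Ctrl-Z terminates the message
        time.sleep(2)
        return gsm.read(gsm.in_waiting or 64)  # modem response, e.g. +CMGS: <id>
```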
Here, two triple-state hidden Markov models are introduced to identify fire and the
motion of a human being, and these models are trained on the wavelet transform of the PIR
signals. After training, the sensor signals are fed to the HMMs, and the model
generating the highest probability determines the event (fire or no-fire) of
the signal.
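A minimal sketch of this decision rule is given below: the scaled forward algorithm evaluates the log-likelihood of a quantized wavelet-coefficient sequence under each trained model, and the model with the higher likelihood labels the event. The three-state transition and emission matrices and the three-symbol quantization are illustrative placeholders, not the trained parameters of the paper.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm for a discrete-observation HMM.
    pi: (S,) initial probs, A: (S, S) transitions, B: (S, V) emissions,
    obs: sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        logp += np.log(alpha.sum())
        alpha /= alpha.sum()
    return logp

def classify(obs, flame_model, no_flame_model):
    """Pick the model (flame / no-flame) with the higher likelihood."""
    lf = log_likelihood(obs, *flame_model)
    ln = log_likelihood(obs, *no_flame_model)
    return "flame" if lf > ln else "no-flame"

# Illustrative 3-state, 3-symbol models (placeholders, not trained values)
pi = np.array([0.6, 0.3, 0.1])
A_f = np.array([[0.2, 0.5, 0.3], [0.4, 0.2, 0.4], [0.3, 0.4, 0.3]])   # flame: frequent flicker transitions
A_n = np.array([[0.9, 0.05, 0.05], [0.05, 0.9, 0.05], [0.05, 0.05, 0.9]])  # no-flame: slowly varying
B = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
obs = [0, 2, 1, 2, 0, 1, 2, 0]   # wavelet coefficients quantized into 3 symbols
print(classify(obs, (pi, A_f, B), (pi, A_n, B)))
```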
Generally, there is a bias in the output of the PIR, and it varies with the room
temperature. This bias is removed by a bi-orthogonal wavelet transform. Let
x(n) be the discrete-time signal. The wavelet coefficients w(k) are
obtained from a first-stage decomposition: the high-frequency information of
x(n) is extracted by an integer-arithmetic high-pass filter corresponding to Lagrange wavelets [3],
followed by decimation.
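A minimal sketch of this bias-removal step is given below, using the simple integer-coefficient half-band high-pass filter [-1/4, 1/2, -1/4] as a stand-in for the Lagrange wavelet filter of [3]; because the filter taps sum to zero, the slowly varying bias is suppressed, and decimation by two yields the coefficients w(k). The synthetic test signal is an assumption for illustration.

```python
import numpy as np

def pir_wavelet_coeffs(x):
    """Single-stage high-pass wavelet decomposition of a sampled PIR signal.
    The filter taps sum to zero, so the temperature-dependent bias is removed."""
    h = np.array([-0.25, 0.5, -0.25])        # integer-arithmetic half-band HPF (illustrative)
    detail = np.convolve(x, h, mode="same")  # high-pass filtering
    return detail[::2]                       # decimation by 2 -> w(k)

# Example: 50 Hz samples with a slowly drifting bias plus a flicker component (synthetic)
n = np.arange(500)
x = 120 + 0.02 * n + 5 * np.sin(2 * np.pi * 8 * n / 50)
w = pir_wavelet_coeffs(x)
```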
Fig. 10 Two triple-state hidden Markov models are used to classify a flame and b no-flame events
5 Experimental Results
The analog signal is sampled and quantized (8 bits) with a sampling frequency of
50 Hz. Examination and categorization strategies are developed with C++ running on
a PC. The sampled version of the analog signal is fed to the PC through the RS-232
serial port.
In this paper, we utilize the IS9B PIR sensor with a radius of 5 m, which is
sufficient to cover most of the space. In our system, we record flame or no-flame
events at a distance of 3 m. For flame data, we burn a piece of paper and record the
PIR output. For the non-flame data, we record walking person sequences.
Figure 13 shows the experimental setup of the proposed method. The proposed
system consists of the GSM module, LCD, MSP430, and PIR detector. Sensor recordings of
different intruder actions and fires at a radius of 3 m are used for training
the HMMs corresponding to the respective events [5].
Table 2 shows the results for five fire and five non-fire test sequences. The system
triggers an alarm when fire is detected within the viewing range of the PIR sensor.
References
1. Erden F, Ugur Toreyin B, Birey Soyer E, Inac I, Gunay O, Kose K, Enis Cetin A (2012) Wavelet
based flickering flame detector using differential PIR sensors. Fire Saf J 53:13–18
2. Sathishkumar M, Rajini S (2015) Smart surveillance system using PIR sensor network and GSM.
Int J Adv Res Comput Eng Technol (IJARCET) 4(1)
3. Kim CW, Ansari R, Cetin AE (1992) A class of linear-phase regular biorthogonal wavelets. In:
Proceedings of the IEEE ICASSP’92, pp 673–676
4. Toreyin BU, Soyer EB, Urfalioglu O, Cetin AE (2008) Flame detection system based on wavelet
analysis of PIR sensor signals with an HMM decision mechanism. In: Proceedings of EUSIPCO
5. Phillips W, Shah M, da Vitoria Lobo N (2002) Flame recognition in video. Pattern Recognit Lett
23(1–3):319–327
LTE-D2D Assisted Communication
in Smart Grid Neighborhood Area
Networks
Abstract Deployment of charging stations for electric vehicles and phasor mea-
surement units for wide area monitoring have put stringent requirements in terms of
real-time data handling capacity and latency on the communication infrastructure in
smart grids. Next-generation cellular standards and Device-to-Device (D2D) com-
munication technology offer promising solutions to address these challenges. This
paper proposes a hierarchical communication infrastructure utilizing LTE and LTE-
D2D modes for information exchange in smart grid Neighborhood Area Network
(NAN). Direct link between smart meter and NAN gateway is replaced by two-hop
relay communication through local data aggregators. A set of smart meters form
a cluster and transmit their status reports periodically to their local aggregator in
LTE-D2D assisted mode. Each local aggregator then forwards cumulative reports
to regional data aggregator via LTE mode. The proposed framework reduces band-
width requirement and total uploading time for networks involving high density of
devices and producing a large volume of data. Performance of the proposed archi-
tecture is compared with WLAN assisted and LTE-Direct transmission modes for
NAN communication.
1 Introduction
The concept of Smart Grid (SG) enables distributed generation, intelligent load con-
trol, and participation of end users in the operation of electric grids. It equips the
grid with automatic fault detection, self-healing, monitoring of power quality, and billing. The SG communication infrastructure is generally organized into three tiers:
(a) Home Area Network (HAN): Power Line Communication (PLC), ZigBee repre-
sented by IEEE 802.15.4, IEEE 802.11 standard-based WLANs, and Bluetooth
technology are available for link setup within such network as it has low band-
width requirements offering data rate up to 100 kbps.
(b) Neighborhood Area Network (NAN): An NAN comprises multiple HANs. An
NAN gateway is a relay node, which can be a pole-mounted device or a microcell
evolved Node Base station (eNB). Different wireless technologies that may be
used for such communication include IEEE 802.15.4g, an amendment to the earlier
ZigBee-alliance standard for longer range communication; IEEE 802.11s, which extends
the MAC layer of IEEE 802.11 for longer ranges; IEEE 802.11ah for large wireless
networks; and WiMAX or LTE-Advanced. NAN communication has a data rate
requirement of up to 10 Mbps. SMs also act as the gateway between HAN and NAN.
(c) Wide Area Network (WAN): This top tier covers NAN gateway, control center,
power generation, and dispatch center. The core network is used for data transfer
between NAN gateway and utility provider’s control center. It requires high
bandwidth and data rate up to 1 Gbps hence requires cellular network or optical
fiber communication. Automatic Metering (AM), active demand response, and
Distribution Automation (DA) are important applications of SG framework.
Choice of communication infrastructure plays an important role in successful
implementation of these applications. It needs to be designed to satisfy diverse QoS
requirements of different applications. Various communication infrastructure models
utilizing different wired and wireless technologies have been proposed to increase
the reliability, scalability, and security of evolving SG framework [4]. Choice and
adoption of communication technology depend on delay tolerance and amount of
data to be transmitted within that tier. 5G cellular technology is, therefore, seen as
the most suitable candidate as it offers high data rate, higher reliability, and high data
security. However, cellular networks are becoming congested due to the scarcity
of radio resources; therefore, heterogeneous networks and D2D communication are
opted to accommodate the increased number of nodes.
In this work, we propose a model using LTE-D2D communication for NAN and
compare its performance with those utilizing WLAN and conventional LTE for neigh-
borhood area communication.
Rest of the paper is organized as follows. Section 2 describes previous work related
to enhancement in SG communication infrastructure along with their advantages and
challenges. Section 3 presents the system model proposed for LTE-D2D assisted
NAN communication. Section 4 gives the performance analysis of the proposed
architecture followed by the conclusion drawn in Sect. 5.
2 Related Work
SG solution was proposed to address the need for self-regulation, greater reliability,
and scalability in electric power grids [5]. An IP-based heterogeneous architecture is
presented in [6] for improved performance of wide area monitoring including home
building and substations. They have utilized PLC for HAN and WiMAX/ZigBee
standards for NAN to accomplish the integration of ultra sensor network with next-
generation SG architecture. Similarly, several studies have been conducted for a
suitable choice of communication standard based on latency and QoS requirements
of respective applications. A mechanism has been designed in [7] to test the suitabil-
ity of a particular communication standard for given SG application. They utilize
topology of the grid and dynamic change in state, i.e., angle and voltage at all buses
of the power system as data for calculating latency. It is concluded that average
bandwidth requirement in HAN is 5–10 Mbps and data rate up to 75 Mbps in NAN.
The tolerable data latency for most SG applications is within 100 ms. Authors in
[8] have proposed an architecture integrating HAN, IED, and NAN into a single
system and using cooperative communication where each node communicates with
data aggregator via relay nodes. Cognitive Radio (CR) based SG architecture has
been presented in [9] as an alternative to heterogeneous communication architecture
in order to increase the reliability and security of the system. They have used CR
technology in all three network tiers of communication layer, i.e., cognitive HAN,
cognitive NAN, and cognitive WAN. However, random interruptions in secondary user
transmission are seen as a major challenge in CR-based SGs, as they introduce traffic delay
and reduce the real-time capability of the system. Also, utilizing CR even in the HAN
will exhaust the spectrum holes of the licensed spectrum, making it more
congested. Therefore, defining a universal standard for CR-based SG communication poses
another challenge. Cellular communication can, therefore, be seen as an alternative
to CR for neighborhood area communication. Various improvements in architec-
ture and protocols are required in cellular networks to make them suitable for NAN
applications. Traffic pattern in SGs is different from human to human communica-
tion traffic. Unlike cellular voice and data traffic, it is sometimes uplink biased, e.g.,
during forwarding fault management report from SMs to utility provider center and
downlink biased in peak hours during forwarding power regulation and management
report from utility provider center to Smart Meters (SM). Deployment of LTE for SG
applications thus requires improved resource management to accommodate a num-
ber of metering devices without affecting the cellular user performance and ensuring
to meet the diverse QoS requirements of different applications [10, 11].
3 System Model
We consider a coexistence scenario composed of a single cell having cellular and elec-
tric distribution links. The hierarchical communication infrastructure proposed for
NAN is shown in Fig. 2. HAN comprises smart electronic devices, power generation
units in consumer premises (home building/industries) and SMs. The SM collects the
power consumption and generation details from consumer premises using WLAN or
PLC technology. SMs need to deliver the information to the utility provider center.
In the first stage, SM transmits its information to Regional Data Aggregator (RDA)
via Local Data Aggregator (LDA). The LDA transfers a combined report of all SMs
covered within its area to RDA using conventional direct LTE mode. Each LDA
can also communicate with every other LDA using LTE-D2D transmission mode. In
the second stage, RDA transmits the collected information to utility control center.
RDA collects and manages information from multiple LDA and forwards it to utility
provider’s control center which resides at the macro cell Base Station (eNB).
SMs, LDAs, and RDAs are assumed to be equipped with an LTE transceiver module. It
is assumed that there are "K" LDAs in an NAN and that a set of "N" SMs transmit their data
to an LDA. The sets of LDAs and SMs are represented as L = {l1, l2, l3, ..., lK} and
S = {s1, s2, s3, ..., sN}, respectively. Each SM delivers its status report, generally
1500 bytes in length, to the RDA via an LDA. SM to LDA communication can be
performed using LTE-Direct transmission mode, WLAN assisted transmission mode
or LTE-D2D assisted transmission mode. Different scheduling algorithms need to
be devised for LTE-Direct transmission mode and LTE-D2D assisted transmission
mode. In WLAN assisted transmission mode, SM utilizes Carrier Sense Multiple
Access (CSMA) for acquiring the channel. The achievable data rate is determined
as follows:
(i) Direct Transmission Mode: In this mode, eNB assigns orthogonal radio
Resource Blocks (RBs) to SMs in an NAN for transmitting their data to RDA. There
is no co-channel interference in such mode and therefore achievable data rate can be
computed as given in (1) and (2).
SINR_{s,RDA} = \frac{P_s\, G_{s,RDA}}{P_{noise}} \qquad (1)
R_{Direct} = B \log_2\left(1 + SINR_{s,RDA}\right) \qquad (2)
where Ps denotes the transmit power of an SM; Gs,RDA is the channel gain between the SM and the
RDA; B is the channel bandwidth; and RDirect is the data rate achieved on the link between the
SM and the RDA.
(ii) D2D Assisted Transmission Mode: In this mode, eNB allocates orthogonal
subcarriers to each SM for transmission within one Transmission Time Interval (TTI).
This resource allocation is done in round-robin fashion such that every SM transmits
its report to LDA within the interval. Data transmission from SM to LDA is performed
in LTE-D2D mode. It is assumed that the RBs allocated to SM may be shared with
the cellular user’s uplink resource located in distant NAN area. As varied channel
conditions are experienced onto the two links, the different achievable data rate is
computed as given by (3a, 3b) and (4a, 4b).
SINR_{s,l} = \frac{P_s\, G_{s,l}}{P_{noise} + P_c\, G_{c,BS}} \qquad (3a)
R_{D2D} = B \log_2\left(1 + SINR_{s,l}\right) \qquad (3b)
SINR_{l,RDA} = \frac{P_l\, G_{l,RDA}}{P_{noise}} \qquad (4a)
R_{l,RDA} = B \log_2\left(1 + SINR_{l,RDA}\right) \qquad (4b)
where Gs,l is channel gain between SM and LDA; Gl,RDA is channel gain between
LDA and RDA; Pl is LDA transmit power; Pc is cellular user transmit power; Gc,BS
is channel gain between cellular user and eNB; RD2D is achievable data rate on the
link between SM and LDA; Rl,RDA is data rate achieved on LDA to RDA link; and B
is channel bandwidth.
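To make Eqs. (1)-(4b) concrete, the sketch below evaluates the Shannon rates of the direct and D2D-assisted links and a simplified serial-upload time for a 1500-byte status report; all powers, gains, bandwidths, and the cluster size are illustrative placeholders rather than the simulation parameters of Table 1.

```python
import math

def rate_bps(p_tx_dbm, gain_db, noise_dbm, interf_dbm=None, bw_hz=180e3):
    """Achievable rate R = B*log2(1 + SINR) per Eqs. (1)-(4b); powers in dBm, gains in dB."""
    sig = 10 ** ((p_tx_dbm + gain_db) / 10.0)      # received power (mW)
    noise = 10 ** (noise_dbm / 10.0)
    interf = 10 ** (interf_dbm / 10.0) if interf_dbm is not None else 0.0
    return bw_hz * math.log2(1.0 + sig / (noise + interf))

REPORT_BITS = 1500 * 8            # one SM status report

# Illustrative link budgets (assumed values)
r_direct = rate_bps(23, -125, -110)                    # SM -> RDA, no co-channel interference
r_d2d    = rate_bps(23, -100, -110, interf_dbm=-115)   # SM -> LDA, shares RBs with a distant cellular uplink
r_l_rda  = rate_bps(30, -115, -110)                    # LDA -> RDA aggregate link

N = 50                                                 # SMs in one cluster
t_direct = N * REPORT_BITS / r_direct                  # every SM uploads directly to the RDA
t_d2d = N * REPORT_BITS / r_d2d + N * REPORT_BITS / r_l_rda   # two-hop: SM->LDA, then LDA->RDA
print(f"direct: {t_direct:.3f} s, D2D assisted: {t_d2d:.3f} s")
```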
4 Performance Analysis
The simulation model consists of eNB located at the center of 1000 m cell radius. LDA
is situated at the center of a microcell of 200 m radius within the macro cell. “N”
SMs are uniformly distributed within the microcell. COST-231 Walfisch-Ikegami
path loss channel model is employed for both direct and D2D assisted transmission
modes. The model also takes into account small-scale fading. The propagation link
between an SM and the LDA, in general, is NLOS, resulting in a dominant Rayleigh fading
component. The link between the LDA and the RDA, being LOS, is characterized by a Rician
random variable. System parameters for the simulation model are listed in Table 1.
The time to transfer the status report of “N” SMs to RDA is defined as the total
uploading time for one cluster. In case of WLAN assisted and LTE-D2D assisted
NAN communication, uploading time is the sum of the time required to upload the
reports from SMs to LDA and that to transmit the collective report from LDA to
RDA. Performance comparison in terms of total uploading time is shown in Fig. 3.
It is observed that in the WLAN assisted scheme the total uploading time increases
linearly with N. This arises because WLAN cannot support simultaneous transmis-
sions as in LTE; hence, the cumulative uploading time increases. In the LTE-D2D
assisted case, devices can share their resources with cellular users; hence, the total
uploading time is less as compared to the other two schemes.
Increasing the number of SMs within an NAN needs a number of channels to
collect their status report. This, in turn, increases bandwidth requirements. Figure 4
shows the increase in required bandwidth to upload the 1500-byte long status report
from SMs to RDA. It can be seen that the bandwidth requirement is large in the LTE-
Direct mode as compared to LTE-D2D assisted mode of communication. This is so
because, in LTE-D2D assisted mode, the higher data rate can be achieved resulting
in the lesser time required to upload the same length of status report as compared
to the LTE-Direct mode of communication. Reduction in uploading time demands
less occupancy of resources hence smaller bandwidth. The difference in bandwidth
demand for two schemes increases with the number of SMs within NAN.
Fig. 4 Bandwidth demand (MHz) versus number of smart meters
5 Conclusion
References
7. Kansal P, Bose A (2012) Bandwidth and latency requirements for smart transmission grid
applications. IEEE Trans Smart Grid 3(3):1344–1352
8. Ahmed MH, Alam MG, Kamal R, Hong CS, Lee S (2012) Smart grid cooperative communi-
cation with smart relay. J Commun Netw 14(6):640–652
9. Le TN, Chin WL, Chen HH (2017) Standardization and security for smart grid communications
based on cognitive radio technologies—a comprehensive survey. IEEE Commun Surv Tutor
19(1):423–445
10. Kalalas C, Thrybom L, Alonso-Zarate J (2016) Cellular communications for smart grid neigh-
borhood area networks: a survey. IEEE Access 4:1469–1493
11. Cheng P, Wang L, Zhen B, Wang S (2011) Feasibility study of applying LTE to smart grid. In:
Smart grid modeling and simulation (SGMS), pp 108–113
Design and Validation of Transverse
Electromagnetic (TEM) Cell
for Measurement of Pulsed Transients
1 Introduction
The advancements in technology require electronics to run faster, at GHz clocks,
and to be miniaturized as much as possible so that they are portable for
some applications. Technology has made a single device perform several tasks,
for example, a handheld mobile phone with features such as Bluetooth, Wi-Fi,
FM, etc. The integration of these independent functions into a single device to perform
multi-tasking poses a severe threat of intra- and inter-device interference.
These devices need to be tested for compliance with EMI/EMC national and
international standards prior to release in the market, to study and
avoid any chance of interference generated by the device within itself and with its
neighboring electronics.
However, testing and calibration of electronic products for EMI/EMC is very
expensive, involving tedious measurements with high-end instruments and shielded
chambers that isolate the test from the external environment. If the product being tested is
small and portable, there is an alternative method that avoids a huge investment in testing:
the TEM cell, which mimics a shielded chamber. In
this paper, a TEM cell is modeled and analyzed to study its electrical characteristics,
as per requirements of the dimensions of the electronic product being tested.
In earlier days, open area test sites (OATS) were built far away from cities to test
products and avoid any chance of interference from nearby devices. An anechoic
chamber is a metallic structure whose walls are covered by anechoic material
to absorb the EM fields radiated by the Equipment Under Test (EUT). External
EM signals cannot enter the chamber, but it is a very expensive
solution for qualifying products for EMI/EMC. For testing small
products, a passive structure, the TEM cell, is an alternative solution. In
it, we can test small products ranging from medical and laboratory to automotive items such
as ECUs, biochips, PCB boards, etc.
A TEM cell is a rectangular coaxial transmission line whose rectangular coaxial cross
section is spread over a proper length in order to accommodate a product whose EM properties
have to be found out. It consists of two parallel grounding plates and a central conductor
with air as the dielectric material. The EM fields develop between the plates.
It is a two-port network in which the first port is considered as the input section and
the second port as the output section. The tapered sections at both the input and output
ports are provided to adapt standard 50 Ω coaxial connectors [2].
The TEM cells are of two types, viz., Closed TEM cell and Open TEM cell, as
shown in Fig. 1.
Fields created in a TEM cell are basically plane EM waves having the wave
impedance of free space (i.e., 377 Ω), as shown in Fig. 2.
A TEM cell functions from 0 Hz (DC) up to a certain usable frequency, decided
by the physical dimensions of the cell. TEM cells are used for radiated emission
testing of small equipment, for radiated immunity testing of small equipment,
for biomedical experiments, and for calibration of RF probes/sensors.
The cutoff frequency of the cell depends upon its dimensions. This is the limitation
of the cell because this constrains the size of a product we can test within the cell.
The E-field strength (V/m) measured at the center point of the workspace of the TEM
cell is known by [3]
E = \frac{\sqrt{P\, Z_o}}{d} = \frac{V}{d} \qquad (1)
The physical dimensions of a TEM cell depend upon its characteristic impedance
and cutoff frequency [4]. The line impedance of the TEM cell is known by [2, 5]
Z_o = \frac{\eta_o}{4\left[\dfrac{a}{b} - \dfrac{2}{\pi}\ln\left(\sinh\dfrac{\pi g}{2b}\right) - \dfrac{\Delta C}{\varepsilon_o}\right]} \qquad (2)
where
2a width of top plate
b separation between the septum and top plate
TEM cell model has been designed by considering values in Table 2. The different
views of the designed geometry are shown in Fig. 3.
The CAD Model has been designed using Altair Hyper Works FEKO 3D EM
Software as shown in Fig. 3.
The voltage to be fed at the input port of the TEM cell is calculated using Eq. (1).
To achieve an electric field strength of 100 V/m in the TEM cell, the voltage to be
fed is computed to be 9 V.
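As a back-of-the-envelope check of Eqs. (1) and (2), the sketch below evaluates the characteristic impedance for assumed cell dimensions and the feed voltage needed for a 100 V/m field; the dimensions a, b, g, the fringing-capacitance term, and the septum-to-plate spacing d = 0.09 m (which reproduces the 9 V figure) are assumptions for illustration, not the values of Table 2.

```python
import math

ETA_0 = 376.73      # intrinsic impedance of free space (ohm)
EPS_0 = 8.854e-12   # permittivity of free space (F/m)

def z0_tem(a, b, g, delta_c=0.0):
    """Characteristic impedance of the cell, Eq. (2)."""
    term = a / b - (2.0 / math.pi) * math.log(math.sinh(math.pi * g / (2.0 * b))) - delta_c / EPS_0
    return ETA_0 / (4.0 * term)

def feed_voltage(e_field, d):
    """Feed voltage for a target field strength, from Eq. (1): E = V/d."""
    return e_field * d

# Assumed example dimensions (metres); not the paper's Table 2 values
print(z0_tem(a=0.12, b=0.09, g=0.0235))      # close to 50 ohm for these assumed dimensions
print(feed_voltage(e_field=100.0, d=0.09))   # 9 V, matching the value reported in the text
```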
S-parameters are evaluated for the above model and the S-parameters are observed
to meet the defined simulation goals. The S11 parameter for TEM cell for different
frequencies is shown in Fig. 4.
Fig. 3 Prototype for open TEM cell: a top angle view b side view c front view and d overall
geometry
Fig. 4 S11 (dB) of the TEM cell versus frequency (GHz)
Fig. 5 S12 (dB) of the TEM cell versus frequency (GHz)
Fig. 6 E-field (V/m) in the TEM cell versus frequency (GHz)
Fig. 7 a Electric field distribution showing the electric field vectors b surface currents distribution
over the TEM cell
After validating the electric characteristics of the TEM cell, the study is carried out
for pulsed transients. In the event of nuclear bursts and lightning, the energy produced
lasts for a very small time, typically in nanoseconds of duration, sometimes causing
disruption of the operation of electrical and electronic products. The study of compliance
of electronic products with such pulsed transients has become an important
immunity evaluation criterion.
To understand how a typical pulse propagates through a coaxial cable, which is
the most probable victim part of a product, the TEM cell is fed with a pulsed transient.
Figure 9a shows the typical parameters of the pulse fed.
The double exponential signal is excited at the input port of the TEM cell, and the observed
S-parameters are shown in Figs. 9b and 10. A minimum of −77.28 dB occurs at 11.85 ns
after excitation of the signal, and a maximum of −22.65 dB at 7.54 ns after excitation
of the signal.
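For reference, a double exponential transient of this kind can be generated as below; the amplitude and the constants alpha = 4e7 s^-1 and beta = 6e8 s^-1 are typical textbook EMP values assumed for illustration, not the exact parameters of Fig. 9a.

```python
import numpy as np

def double_exponential(t, v0=1.0, alpha=4.0e7, beta=6.0e8):
    """Double exponential pulse v(t) = v0*(exp(-alpha*t) - exp(-beta*t));
    alpha sets the decay (tail) and beta the rise of the transient."""
    return v0 * (np.exp(-alpha * t) - np.exp(-beta * t))

t = np.linspace(0.0, 200e-9, 2001)   # 200 ns window sampled every 0.1 ns
v = double_exponential(t)
t_peak = t[np.argmax(v)]             # peak occurs after a few nanoseconds
```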
Fig. 9 a Double exponential signal pulse parameters b S11 and S22 for double exponential time
domain signal input
4 Conclusions
The fabrication of open TEM cell and the handling of EUT with open TEM cell are
simple when compared to those of GTEM cell. The TEM cell can be used for radiated
emission and immunity testing of small products. The study carried out using pulse
transients will be helpful in EMP immunity test.
The operating frequency range of the TEM model can be extended to 2 GHz by
suppressing the higher order modes. The study and identification of higher order
modes in the TEM cell will be helpful in finding methods to nullify the higher order
modes in the TEM cell. Thus, the operating range of the TEM cell can be extended.
References
1. Crawford ML (1974) Generation of standard EM fields using TEM transmission cells. IEEE
Trans Electromagn Compat 16(4):189–195
2. Tippet JC, Chang DC (1976) Radiation characteristics of dipole sources located inside a rectan-
gular, coaxial transmission line. NBSIR, 75-829
3. Satav SM, Agarwal V (2008) Do-it-yourself fabrication of an open TEM cell for EMC pre-
compliance. IEEE
4. Iftode C, Miclaus S (2012) Design and validation of a TEM cell used for radiofrequency dosi-
metric studies. Prog Electromagn Res 132:369–388
5. Anderson GM (1950) The calculation of the capacitance of co-axial cylinders of rectangular
cross-section. AIEE Trans 69
Intensity Inhomogeneity Correction
for Magnetic Resonance Imaging
of Automatic Brain Tumor Segmentation
Abstract Automatic segmentation of brain tumor data is a very important task for
all medical image processing applications, especially in the diagnosis of cancer. This
work deals with some of the challenging issues such as noise sensitivity, partial vol-
ume averaging, intensity inhomogeneity, inter-slice intensity variations, and intensity
non-standardization. To deal with the above tasks, this work uses the 3D convolutional
neural network (3DCNN) for automatic segmentation and a novel N3T-spline inten-
sity inhomogeneity correction for bias field correction. The proposed work consists
of four levels: (i) preprocessing, (ii) feature extraction, (iii) automatic segmentation,
and (iv) postprocessing. In the first stage, a novel N3T-spline is suggested to cor-
rect the bias field distortion for reducing the noises and intensity variations. For the
extraction of texture patches, the extended gray level co-occurrence matrix-based
feature extraction is used. Then, the proposed 3D convolution neural network auto-
matically segments the brain tumor and divides the various abnormal tissues. Finally,
a simple threshold scheme is applied to the segmented results for correcting the false
labels and to eliminate small 3D-connected regions. The simulation results show that the
proposed segmentation approach attains competitive performance compared
with the existing approaches on the BRATS 2015 dataset.
1 Introduction
Medical image processing approaches have developed rapidly in recent years. Medical
image capturing and storage have been digitalized by recent techniques, reducing
the time spent on medical image processing [1]. In several image processing and computer
vision applications, image segmentation is the major task. Segmentation of a human
brain tumor is mainly focused on separating the abnormal tissues from the normal tissues
of the brain. A region of an image is separated according to the relevant
anatomical information in the surgical image processing step.
Gliomas are the most common and most important brain tumors in adults; they arise from
glial cells and affect the surrounding tissues [2]. However, the information provided by glioma
examination is still inadequate for patient diagnosis. Patients with an average survival
of 2 years or less suffer the most aggressive form of the disease,
known as high-grade gliomas, which need immediate treatment [3, 4], whereas the
slowly progressing form of the disease is called low-grade astrocytoma or oligodendroglioma.
To follow the progress of the disease, intensive neuroimaging protocols are used
before and after the treatment [5, 6].
The magnetic resonance imaging (MRI) technique is used to identify the peculiar
variation in the different areas of the brain at the first level. MRI images have finer
contrast than computerized tomography (CT) images. Many studies in medical image
segmentation use complex and challenging MRI images [10–13]. The proper segmentation of
MRI is essential for identifying necrotic tissues, tumors, and edema, and it is also
essential for the diagnostic system to discover the tissues [14, 15].
Many approaches have been applied for automatic and semiautomatic image segmentation,
but they fail on medical images because of excessive noise, low image
contrast, inhomogeneity, and poor boundaries [7–9]. Medical images have a complex
form, yet their fine segmentation is required for clinical diagnosis [16, 17]. Previous
methods demonstrate the overall picture of brain MRI segmentation. Two classes of
algorithms are commonly used: supervised methods (Bayes classifiers, the k-nearest
neighbor rule (KNN) [18], and artificial neural networks (ANN)) [19, 20] and
unsupervised methods (i.e., fuzzy C-means (FCM) algorithms). Some other algorithms
involve the validation, preprocessing, and registration between different MR images
for soft brain tissue segmentation.
The important challenging job of brain tumor segmentation in MRI images is
noise reduction, inhomogeneity correction, and automatic segmentation. The goal of
segmentation of tumors in the human brain is to separate the peculiar tissues (necrotic
core, edema, and active cells) from common tissues (gray matter, cerebrospinal fluid,
and white matter) of the brain.
A novel three-dimensional convolutional neural network (3DCNN) is proposed for automatic
brain MRI segmentation, together with an N3T-spline intensity inhomogeneity correction for bias
field correction. The three stages of processing are preprocessing, automatic segmentation,
and post-processing. MRI images vary according to the bias field,
which changes the intensity of the same tissues, so a well-known intensity inhomogeneity
correction method, N3 (nonparametric nonuniform intensity normalization), is used.
This approach is iterative and estimates the smooth
multiplicative field that maximizes the high-frequency content of the tissue intensity
distribution; it is completely automated, does not need any prior
information, and can be applied to any MR image. Its popularity and success, however,
do not rule out improvements. The original N3 algorithm is improved by replacing the
B-spline smoothing strategy used in the original N3 framework with a favorable
T-spline substitute, which motivates the evaluation studies of N3. The demerits of the
existing deep learning-based segmentation methods are that they do not train in an
end-to-end manner, they have fixed-size input and output, they have a small/restricted
field of view, and their processing is time-consuming. The suggested hybridization of
convolutional neural networks and a fully connected conditional random field produces
accurate and detailed segmentation maps, which helps to overcome these demerits. The
final post-processing stage is carried out through a threshold.
The major significant contribution of the proposed work is given as follows:
(1) A novel N3T-spline inhomogeneity correction to overcome intensity variations
and noise reduction.
(2) EGLCM feature extraction to assess, evaluate, and produce accurate predictions
and detailed segmentation maps.
(3) Robust automatic 3DCNN deep learning-based segmentation divides various
peculiar tissues (necrotic core, edema, and active cells) from common tissues.
2 Proposed Methodology
The proposed automatic brain tumor segmentation work shown in Fig. 1 is divided
into four stages as follows: (i) preprocessing, (ii) feature extraction, (iii)
segmentation using a deep learning model, and (iv) postprocessing.
The experimental results were simulated in MATLAB R2017a. The accuracy of the
anticipated 3DCNN segmentation scheme is estimated on the standard BRATS
2015 dataset. A group of analytical measures has been applied to evaluate the
segmentation results. The performance measures used to evaluate the BRATS
segmentation results include sensitivity, Jaccard, matching, specificity, positive
predictive value (PPV), dice similarity coefficient (DSC), and accuracy.
Fig. 1 Overall framework of the proposed method (training dataset, T1/T1c input data, and thresholding-based post-processing)
4 Imaging Data
The suggested approach is analyzed with the BRATS 2015 database. In the database,
there are four MRI sequences such as T1, T1c, T2, and FLAIR. The training set
is comprised of 220 images on HGG and 54 images on LGG. This dataset is a
challenging dataset. During manual segmentation, four intra-tumoral classes are defined.
The first class is the enhancing core. Next is the tumor core, which consists of a
combination of necrosis, enhancing tumor, and non-enhancing tumor. Finally, all the
classes combine to form the whole tumor.
Figure 2a shows the MRI input image, and then that image is bias-corrected and
the tumor area is shown in figure (c) and the segmented area results are shown in
figure (d). The final segmentation result is illustrated in (e), the necrotic core, edema,
and active cells with color representation. The yellow color represents the edema,
pink color represents the enhancing core, and blue color represents the necrotic core.
Figures 3, 4, 5, 6, and 7 show the 3D plots of the segmented output. Because
of the complexity of brain tissue and the various types of tumors in the brain,
supervised evaluation is adopted in this study. For the evaluation of the
segmentation results, a group of analytical measures is used. The quantities used to
assess the segmentation performance are false positive (FP), true positive (TP),
false negative (FN), and true negative (TN).
Fig. 2 Segmentation results for collected dataset: a input image, b bias-corrected image, c tumor
detected area, d segmented tumor, and e a final segmented tumor
TP denotes the number of pixels correctly segmented as part of the tumor;
FP denotes the number of pixels incorrectly segmented as part of the tumor;
FN denotes the number of tumor pixels incorrectly segmented as non-tumor; and
the correctly segmented non-tumor pixels are denoted as TN. Accuracy is the ratio of
correctly segmented pixels to the total number of pixels. The proportion of
correctly segmented tumor pixels is specified as the sensitivity, and the correctly segmented
proportion of the non-tumor region is referred to as the specificity.
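A small helper such as the following, a sketch using the standard confusion-matrix definitions rather than code from the paper, computes the measures listed above from the TP, FP, FN, and TN counts.

```python
def segmentation_metrics(tp, fp, fn, tn):
    """Standard pixel-wise segmentation measures from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # correctly segmented tumor proportion
    specificity = tn / (tn + fp)            # correctly segmented non-tumor proportion
    ppv = tp / (tp + fp)                    # positive predictive value
    dsc = 2 * tp / (2 * tp + fp + fn)       # dice similarity coefficient
    jaccard = tp / (tp + fp + fn)           # overlap (Jaccard) index
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "PPV": ppv, "DSC": dsc, "Jaccard": jaccard, "accuracy": accuracy}

# Example with made-up counts
print(segmentation_metrics(tp=900, fp=120, fn=80, tn=9000))
```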
5 Conclusion
Along these lines, the paper proposes a deep 3DCNN for automatic segmentation
of MRI brain data. This model is intended to obtain tumor segmentation
outcomes with appearance and spatial consistency. The anticipated system is
evaluated on the BRATS dataset, which shows that the 3DCNNs
achieve computational efficiency and take full advantage of the 3D structure of
the MRI data. Our test outcomes have shown that these procedures
can enhance the tumor segmentation performance.
References
1. Balafar MA, Ramli AR, Saripan MI, Mashohor S (2010) Review of brain MRI image segmen-
tation methods. Artif Intell Rev 33(3):261–274
2. Khotanlou H, Colliot O, Atif J, Bloch I (2009) 3D brain tumor segmentation in MRI using
fuzzy classification, symmetry analysis and spatially constrained deformable models. Fuzzy
Sets Syst 160(10):1457–1473
3. Ahmed MN, Yamany SM, Mohamed N, Farag AA, Moriarty T (2002) A modified fuzzy c-
means algorithm for bias field estimation and segmentation of MRI data. IEEE Trans Med
Imaging 21(3):193–199
4. Zhang N, Ruan S, Lebonvallet S, Liao Q, Zhu Y (2011) Kernel feature selection to fuse multi-
spectral MRI images for brain tumor segmentation. Comput Vis Image Underst 115(2):256–269
5. Hall LO, Bensaid AM, Clarke LP, Velthuizen RP, Silbiger MS, Bezdek JC (1992) A comparison
of neural network and fuzzy clustering techniques in segmenting magnetic resonance images
of the brain. IEEE Trans Neural Netw 3(5):672–682
6. Menze BH, Jakab A, Bauer S, Kalpathy-Cramer J, Farahani K, Kirby J, Lanczi L et al (2015) The
multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans Med Imaging
34(10):1993–2024
7. Shen S, Sandham W, Granat M, Sterr A (2005) MRI fuzzy segmentation of brain tissue using
neighborhood attraction with neural-network optimization. IEEE Trans Inf Technol Biomed
9(3):459–467
8. Dou W, Ruan S, Chen Y, Bloyet D, Constans JM (2007) A framework of fuzzy informa-
tion fusion for the segmentation of brain tumor tissues on MR images. Image Vis Comput
25(2):164–171
9. Ho S, Bullitt E, Gerig G (2002) Level-set evolution with region competition: automatic 3-D
segmentation of brain tumors. In: Proceedings of the 16th international conference on pattern
recognition, vol 1. IEEE, pp 532–535
10. Gordillo N, Montseny E, Sobrevilla P (2013) State of the art survey on MRI brain tumor
segmentation. Magn Reson Imaging 31(8):1426–1438
11. Prastawa M, Bullitt E, Ho S, Gerig G (2004) A brain tumor segmentation framework based on
outlier detection. Med Image Anal 8(3):275–283
12. Kapur T, Grimson WE, Wells WM, Kikinis R (1996) Segmentation of brain tissue from mag-
netic resonance images. Med Image Anal 1(2):109–27
13. Fletcher-Heath LM, Hall LO, Goldgof DB, Murtagh FR (2001) Automatic segmentation of
non-enhancing brain tumors in magnetic resonance images. Artif Intell Med 21(1):43–63
14. Prastawa M, Bullitt E, Moon N, Van Leemput K, Gerig G (2003) Automatic brain tumor
segmentation by subject specific modification of atlas priors. Acad Radiol 10(12):1341–1348
15. Mazzara GP, Velthuizen RP, Pearlman JL, Greenberg HM, Wagner H (2004) Brain tumor target
volume determination for radiation treatment planning through automated MRI segmentation.
Int J Rad Oncol Biol Phys 59(1):300–312
16. Pereira S, Pinto A, Alves V, Silva CA (2016) Brain tumor segmentation using convolutional
neural networks in MRI images. IEEE Trans Med Imaging 35(5):1240–1251
17. Xie K, Yang J, Zhang ZG, Zhu YM (2005) Semi-automated brain tumor and edema segmen-
tation using MRI. Eur J Radiol 56(1):12–19
18. Zou KH, Warfield SK, Bharatha A, Tempany CM, Kaus MR, Haker SJ, Wells WM, Jolesz
FA, Kikinis R (2004) Statistical validation of image segmentation quality based on a spatial
overlap index 1: scientific reports. Acad Radiol 11(2):178–189
19. Kaus M, Warfield SK, Jolesz FA, Kikinis R (1999) Adaptive template moderated brain tumor
segmentation in MRI. In: Bildverarbeitungfür die Medizin 1999. Springer, Berlin, Heidelberg,
pp 102–106
20. Clarke LP, Velthuizen RP, Clark M, Gaviria J, Hall L, Goldgof D, Murtagh R, Phuphanich S,
Brem S (1998) MRI measurement of brain tumor response: comparison of visual metric and
automatic segmentation. Magn Reson Imaging 16(3):271–279
Periocular Region-Based Age-Invariant
Face Recognition Using Local Binary
Pattern
Abstract The performance of the biometric face schemes suffers severely due to
the variation in the subject’s aging. Designing the face recognition systems which are
invariant to the aging process is challenging as the age patterns are different for the
different individuals and also limited databases are available. The aging-based face
recognition is still an open challenge for researchers as none of the existing methods
are on par with human ability in recognizing the similarity across two faces. In the
proposed paper, the age-invariant features of the face are derived using the local
descriptor, local binary pattern (LBP). Preprocessing techniques like enhancement
and denoising are applied to the images to enhance the accuracy of the designed sys-
tem. Chi-square distance is used as a classifier to find the matching score between two
feature vectors of the probe and gallery images on four unique, challenging datasets.
Publicly available face datasets such as FG-Net, FRGC, FERET, and Georgia Tech
are used for the experimentation, and the results prove that the proposed system is
robust to the changes in age and outperforms most of the existing systems.
1 Introduction
Under a controlled environment, automated face recognition systems [1] provide
good recognition rates, but their accuracy is severely
affected by variations in the aging pattern of the subject, which is a natural and
uncontrollable phenomenon. Aging causes changes in appearance, reducing
the usability of the available facial databases. A biometric system which is invariant
to the aging process is still a new and open challenge for the researchers because
of its applications like forensic science and identification of missing subjects which
require the systems to be invariant to the aging process. The primary constraint in
the design of these systems is the nonavailability of a good aging database. In the
unconstrained scenarios, the periocular region [2] is gaining significance as a biometric
trait, as it is the most discriminating section of the face, offers better robustness and
flexibility, and its images are easy to acquire under adverse conditions. The periocular
area of the face is the region in the vicinity of the eye.
2 Literature Review
Park et al. [2] investigated the discriminating capability of the periocular region of
the face as a helpful biometric feature. Feature extraction methods employed are
local descriptors, and Euclidean distance is used as a classifier. Park et al. [3] had
presented a 3D modeling system to balance the age variations in the face, which
had improved the recognition rates. Mahalingam and Kambhamettu [4] performed
the age-invariant recognition with a graph-based method, which is constructed using
feature points with the vertices of the facial image. Woodard et al. [5] explored the
use of periocular region as a biometric attribute. The periocular image is divided
into blocks, and the respective feature vectors are generated for these blocks, and
various distance measures are used as classifiers. Lyle et al. [6] explored the use of
periocular region for soft biometric classification. Juefei-Xu et al. [7] proposed the
Walsh–Hadamard transform encoded local binary patterns to extract age-invariant
features based on the periocular region. Unsupervised discriminant projection is used
for classification of the extracted feature vectors.
3 Proposed Methodology
With the aging, the person’s face experiences a lot of variations which are affecting
the performance of the biometric systems. A periocular region is consistent across
ages when compared to face, and this region is used in the design of biometric systems
which are invariant to age [8, 9] for extracting the age-invariant features. Figure 1
shows that the periocular region is consistent across ages. Periocular region-based
face recognition increases the processing speed and reduces the memory requirement
as the periocular area occupies only 25% of the face template.
The proposed method uses the local binary pattern (LBP) on the periocular region
[10] for extracting the age-invariant discriminative features, and chi-square distance
is employed for matching the feature vectors of probe image with the gallery images.
Three individual LBP feature vectors are generated from the ROI image, enhanced
image, and denoised images, which are compared with three gallery feature vectors
to produce the matching score as shown in Fig. 2.
Preprocessing techniques like enhancement and denoising are applied to the raw
images to improve the precision of the designed scheme. In the proposed paper, the
enhancement technique employed is self-quotient enhancement [11], which comprises
estimation of the illumination; subtracting this illumination reduces the effect
of illumination variations. The discrete wavelet transform is the denoising technique used
to eliminate the noise present in the images.
Ojala [12] proposed the texture classification technique, namely, local binary pattern
(LBP). It collects the texture information from an image into a feature vector by
labeling pixels with a binary number by placing a threshold on the neighborhood
around each pixel. A histogram of these values forms the output feature vector.
The LBP value of the pixel of concern Qk is a function of intensity changes in the
neighborhood of M sampling points on a circle of radius r, and then the LBP operator
[13] is given by
LBP_{M,r} = \sum_{n=0}^{M-1} s(g_n - g_c)\, 2^n \qquad (1)
g_c is the intensity of the concerned pixel at the center, and g_n, n = 0, ..., M − 1, are the
intensities of the pixels on the circumference of the circle.
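A minimal NumPy sketch of this operator is shown below; it uses nearest-neighbour sampling of the M circular neighbours and the usual thresholding s(x) = 1 for x >= 0 and 0 otherwise, and then builds the histogram feature vector described in the next section. It is an illustration under these assumptions, not the exact implementation used by the authors.

```python
import numpy as np

def lbp_image(img, M=8, r=1):
    """Pixel-wise LBP code per Eq. (1), with nearest-neighbour circular sampling."""
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    for n in range(M):
        angle = 2.0 * np.pi * n / M
        dy, dx = -r * np.sin(angle), r * np.cos(angle)
        yy = np.clip(np.round(np.arange(h)[:, None] + dy).astype(int), 0, h - 1)
        xx = np.clip(np.round(np.arange(w)[None, :] + dx).astype(int), 0, w - 1)
        gn = img[yy, xx]                                  # intensity of the n-th neighbour
        codes |= ((gn >= img).astype(np.uint8) << n)      # s(gn - gc) * 2^n
    return codes

def lbp_histogram(img, bins=256):
    """Normalised LBP histogram used as the feature vector."""
    hist, _ = np.histogram(lbp_image(img), bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)

# Example on a random 8-bit periocular ROI (placeholder data)
roi = np.random.randint(0, 256, (64, 128), dtype=np.uint8)
feature = lbp_histogram(roi)
```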
Figure 3 shows the framework for the feature extraction with the following
sequences of steps: (i) extraction of region of interest (ROI), (ii) enhancement of
image, and (iii) removal of noise.
Three individual LBP histograms are obtained from the (i) cropped ROI, (ii) enhanced
ROI, and (iii) denoised ROI. For a given image, these histograms are fused into a
single feature vector and are compared with the corresponding features from the
images of the database.
The final distance between two images, a probe image IP and a gallery image IG, used for
match score generation, is a weighted combination of the three individual distances,
where α1, α2, and α3 are the weights of the cropped ROI, enhanced ROI, and
denoised ROI images, respectively.
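A sketch of this matching step is shown below: the chi-square distance compares a probe histogram with a gallery histogram for each of the three representations, and the distances are fused with the weights α1, α2, α3. Equal weights are assumed here purely for illustration, as their values are not stated at this point in the paper.

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalised LBP histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def match_score(probe_feats, gallery_feats, alphas=(1/3, 1/3, 1/3)):
    """Weighted fusion of the cropped-ROI, enhanced and denoised distances."""
    return sum(a * chi_square(p, g)
               for a, p, g in zip(alphas, probe_feats, gallery_feats))

# probe_feats / gallery_feats would each be the three LBP histograms of one image
```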
The proposed algorithm’s performance is verified against the existing databases such
as FG-NET, FRGC, FERET, and Georgia Tech [14]. These particular databases are
publicly available with variations in pose, occlusion, expression, illumination, etc.,
taken under unconstrained conditions [15].
Table 1 Proposed scheme Rank-1 accuracies when tested on various face databases
Face database            Rank-1 accuracy
FRGC database            88.02
FG-NET database          82.75
FERET database           80.73
Georgia Tech database    64.39
The proposed methodology is carried out
in stages using MATLAB software. The pixel-based LBP is used as local descriptors
on the periocular region [16] for the feature extraction of discriminative features,
and chi-square distance is used as a classifier to find the matching score between
two feature vectors on four unique, challenging datasets. Rank-1 recognition rate is
the performance measure of the scheme in classifying the best possible match for a
given image. Table 1 shows the proposed system Rank-1 accuracies when tested on
various face databases.
FRGC database provides relatively higher Rank-1 accuracies when compared
with other databases. It consists of frontal face images of the high-quality resolution
taken under controlled lighting conditions. It consists of large face images resulting
in larger periocular region images improving the recognition rates. A comparison
between individual accuracies and the combined accuracy is shown in Table 2.
Based on the results obtained, the chi-square distance method is chosen to calcu-
late the variations among the probe and gallery LBP histograms as given in Table 3.
For the FRGC dataset, Rank-1 accuracies of LBP method are compared with
another local descriptor WLD method. LBP method showed better performance
when compared with Weber local descriptor for the periocular regions [17] as shown
in Table 4.
5 Conclusion
In the proposed paper, the pixel-based LBP is used as local descriptors on the periocu-
lar region for the feature extraction of discriminative features, and chi-square distance
is used as a classifier to find the matching score between two feature vectors on four
unique, challenging datasets. Preprocessing techniques like self-quotient enhance-
ment method, DWT, and denoising techniques are applied to images to enhance
the accuracy of the designed system. FG-NET, FRGC, FERET, and Georgia Tech
databases are employed to establish the performance of periocular biometric modal-
ity. With limited data availability, the periocular region provides performance similar
to that of biometric face systems.
References
10. Joshi A, Gangwar A, Sharma R, Saquib Z (2012) Periocular feature extraction based on LBP
and DLDA. In: Advances in computer science, engineering & applications, volume 166 of
advances in intelligent and soft computing. Springer, pp 1023–1033
11. Tandon A, Gupta P (2014) An efficient age-invariant face recognition. In: International con-
ference on software intelligence technologies and applications & international conference on
frontiers of internet of things 2014. Hsinchu, pp 131–137
12. Ojala T, Pietikainen M, Maenpaa T (2001) A generalised local binary pattern operator for
multiresolution gray-scale and rotation invariant texture classification. In: Second international
conference on advances in pattern recognition, pp 397–406
13. Mahalingam G, Ricanek K (2013) LBP-based periocular recognition on challenging face
datasets. EURASIP JIVP 2013(1):1–13
14. Kumar KK, Trinatha Rao P (2016) Face verification across ages using discriminative methods
and see 5.0 classifier. In: 1st international conference on information and communication
technology for intelligent systems: Springer SIST Series, vol 51. Springer, Cham, pp 439–448
15. Kumar KK, Trinatha Rao P (2018) Periocular region based biometric identification using the
local descriptors. In: Intelligent computing and information and communication. advances in
intelligent systems and computing, vol 673. Springer, Singapore, pp 341–351
16. Kumar KK, Trinatha Rao P (2018) Biometric identification using the periocular region. In:
2nd international conference on information and communication technology for intelligent
systems, Springer SIST Series, vol 84. Springer, Cham, pp 619–628
17. Kumar KK, Pavani M (2017) LBP based biometric identification using the periocular region. In:
IEEE 8th annual information technology, electronics and mobile communication conference
(IEMCON). Vancouver, BC, pp 204–209
Extraction of Lesion and Tumor Region
in Multi-modal Images Using Novel
Self-organizing Map-Based Enhanced
Fuzzy C-Means Clustering Algorithm
Abstract Analyzing the medical images and segmenting the same for detecting the
tumor and lesion regions embedded within the images are quite a tedious process.
On performing the task of tumor and lesion region detection, several intricacies arise
and two of the major hindrances are time complexity and accuracy level sustainment.
Resolving these two issues is the major concern of this paper and the authors have
achieved it, which could be verified from the figures of this paper. If the examination
of the medical images obtained through modalities such as MRI and CT is clearly
processed using an algorithm, preplanning of surgical procedures could be made
with ease. The authors focus on the development of such an algorithm; the algorithm
framed in this research combines the working of the self-organizing map
(SOM) and enhanced fuzzy C-means (EnFCM), and the authors have collectively
named it SOM-based EnFCM. The proposed algorithm has produced
a high peak signal-to-noise ratio (PSNR) value of 60 dB and mean square error
(MSE) of 0.06. The time required by the algorithm for processing 71 input slice
images acquired through CT and MRI scans is around 6 s, and the overall accuracy
exhibited by the algorithm is 48%. This has given a new and a dynamic approach,
which could be greatly used by the radiologists in clinical practices. To contest and
prove the efficiency of the SOM–EnFCM algorithm, the segmentation results of
SOM and EnFCM algorithms while operating individually are compared.
1 Introduction
In the current scenario, for extensive medical research and effective radiotherapy to
occur, MRI and CT scanners are used in abundance. Complex tissue structures and
abnormal regions could be effectively identified by the abovesaid scanners. In spite
of the usage of the abovesaid modalities, there arises a critical situation, when the
abnormal region cannot be identified by a human operator. To downsize the effect
of the abovesaid hardship, several automated algorithms were proposed. Some of
the methodologies, which have set the benchmark in the field of medical image
analysis, have been deeply investigated. Bai et al. [2] introduced an improved FCM
algorithm in which the objective function extensively depends upon the mean of the
local neighborhood. Zhang et al. [16] suggested the nonlocal information of neigh-
bor function, which was established using improved FCM algorithm. Das and Da [5]
used the fuzzy C-means algorithm for medical image segmentation. The algorithm
performs the processing of the input image using the crossover and mutation process
of the modified genetic algorithm through which the cluster center is initialized, and
the final segmentation result is derived. Vishnuvarthanan et al. [15] used PSO-based
FCM along with region growing algorithm to perform precise tumor region detec-
tion, which invariably requires larger processing time. Vishnuvarthanan et al. [13]
developed a bacteria foraging optimization-based modified fuzzy K-means algo-
rithm, which requires reduced time for providing the segmentation results. Torbati
et al. [12] suggested a semi-supervised algorithm named as moving average SOM
algorithm to perform the clustering processes, which provides effective segmentation
result with the aid of minimal human intervention. Lopez-Rubio et al. [8] introduced
an SOM neural network that relies on dynamic spanning tree structure, in which the
learning process requires more processing time for producing efficient segmentation
result. Vishnuvarthanan et al. [14] introduced a novel SOM-based FKM algorithm
that helps to identify the heterogeneous tumor region in MR brain images. The author
suggested that brain extraction tool (BET) is used for preprocessing of input medical
images that significantly require human interaction. Demirhan et al. [6] introduced
SOM-based LVQ approach that helps to analyze the MR brain images. The algo-
rithm has the capability to produce efficient segmentation results in minimal human
interaction. Helmy et al. [7] used a modified pulse-coupled neural network algorithm,
which is certainly an advance process under supervised clustering. In order to achieve
effective segmentation results using the modified pulse-coupled neural network, the
automatic cluster center is derived using SOM algorithm. Ortiz et al. [10] introduced
a novel EGS-SOM algorithm, in which cluster center is derived using the genetic
algorithm. Moreover, the author improved the SOM performance by reducing both
the quantization error and topological error, which deals with the modification of
objective function. Aghajari and Chandrashekhar [1] introduced a novel SOM-based
extended FCM algorithm, which relies upon the statistical feature for performing the
clustering operation.
From the literature survey, it is clear that an algorithm capable of segmenting
multimodal images of different organs in the patient's body is unavailable. This
specific reason has motivated the authors to develop the SOM–EnFCM algorithm presented
in this paper, which essentially meets the necessities and expectations of a radiologist.
The structure of this research paper is designed as follows: Sect. 2 describes
the proposed algorithm, and Sect. 3 elaborates the results and discussion. Section 4
explains the conclusion of the proposed algorithm.
2 Proposed Methodology
Subsequently, the best matching function of the nearest neighbors is found using the
weight vector of the SOM prototype wi, which can be updated for an effective clustering
process. The prototype formation for the given input image depends upon time, which is
related to the exponential decay of a Gaussian function. The updated SOM prototype is
specified in Eq. (2).
Successively, the updated SOM prototype has been measured with the aid of learning
factor and Euclidean distance. The updated SOM prototype wi plays a significant
role in dimensionality reduction, which augments the clustering process. Moreover,
EnFCM algorithm obtains its inputs from the updated SOM prototype wi. Generally, the
EnFCM algorithm has a high data handling capability and performs well over various data
ranges. On the other hand, if the number of clusters and the number of iterations keep
increasing, the convergence rate is affected. Reducing the number of clusters and
iterations quickens convergence but has an unfavorable effect on the segmentation
accuracy. To overcome these
hindrances, SOM and EnFCM are combined together and presented in this paper,
which is declared to be a novel segmentation algorithm. The EnFCM algorithm was
designed by Szilagyi et al. [11], which performs the clustering process based on the
spatial information present in the input medical image. The objective function of
EnFCM algorithm is defined in Eq. (3).
$$J(U, M) = \sum_{i=1}^{c}\sum_{k=1}^{N} \gamma_i \,(u_{ik})^m \,\|\xi_i - w_i\|^2 \quad (3)$$

$$\sum_{k=1}^{N} \gamma_i = N \quad (4)$$

$$\sum_{k=1}^{N} (u_{ik})^m = 1 \quad \forall i \quad (5)$$
Here, γi refers to the gray-level values of k pixels that are obtained from Eq. (4).
The maximum membership value in Eq. (5) is provided by the efficacious clustering
process for EnFCM algorithm. Let ξi represent the weighted sum of the local neigh-
bors that has the gray-level information of input image. The value of ξi is described
in Eq. (6).
$$\xi_i = \frac{1}{1+\alpha}\left(x_i + \frac{\alpha}{N_R}\sum_{j \in N_i} x_j\right) \quad (6)$$
Here in Eq. (6), α determines the nearest neighbors, and Ni mentions the group
of nearest neighbors. Chen and Zhang [3] recommended the clustering process to
reduce the computational time for obtaining the nearest neighbors using similar
neighborhood function. The second term $\frac{1}{N_R}\sum_{j \in N_i} x_j$ of Eq. (6), which denotes the
mean of the local neighbors, has been replaced with the notation $\bar{x}_i$. Now, the
value of $\xi_i$ is represented in Eq. (7).

$$\xi_i = \frac{1}{1+\alpha}\left(x_i + \alpha \bar{x}_i\right) \quad (7)$$
$$u_{ik} = \frac{(\xi_i - w_i)^{2/(m-1)}}{\sum_{k=1}^{N} (\xi_i - w_k)^{2/(m-1)}} \quad (9)$$

$$v_x = \frac{\sum_{k=1}^{N} \gamma_i \,(u_{ik})^m \,\xi_i}{\sum_{k=1}^{N} \gamma_i \,(u_{ik})^m} \quad (10)$$
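As an illustration of how the histogram-based clustering described by Eqs. (3)–(10) can be realized, the following Python sketch performs an EnFCM-style clustering of a gray-level image. It is only a minimal sketch, not the authors' implementation: the local-mean filtering follows Eq. (7), the memberships use the standard FCM update, and the cluster-center update mirrors Eq. (10); the window size, cluster count, and fuzzifier m are assumed values.

```python
import numpy as np

def enfcm_gray_level(image, n_clusters=4, alpha=0.5, m=2.0, n_iter=50, tol=1e-5):
    """EnFCM-style clustering on the gray-level histogram of a 2-D image (sketch)."""
    img = np.asarray(image, dtype=float)
    # 3x3 local mean (edge-padded box filter), used to form xi_i as in Eq. (7)
    padded = np.pad(img, 1, mode="edge")
    local_mean = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                     for dy in range(3) for dx in range(3)) / 9.0
    xi = (img + alpha * local_mean) / (1.0 + alpha)

    # histogram of the filtered image: gamma_l = number of pixels at gray level l
    levels, gamma = np.unique(np.round(xi).astype(int), return_counts=True)
    levels = levels.astype(float)
    w = np.linspace(levels.min(), levels.max(), n_clusters)   # initial centers

    for _ in range(n_iter):
        d = (levels[None, :] - w[:, None]) ** 2 + 1e-12       # squared distances
        u = d ** (-1.0 / (m - 1.0))
        u /= u.sum(axis=0, keepdims=True)                     # fuzzy memberships
        w_new = (gamma * u ** m * levels).sum(axis=1) / (gamma * u ** m).sum(axis=1)
        if np.max(np.abs(w_new - w)) < tol:
            w = w_new
            break
        w = w_new

    # hard segmentation: assign every pixel to the nearest cluster center
    labels = np.argmin(np.abs(xi[..., None] - w[None, None, :]), axis=-1)
    return labels, w

# toy usage with a synthetic two-region image
demo = np.zeros((64, 64)); demo[:, 32:] = 200.0
print(enfcm_gray_level(demo, n_clusters=2)[1])
```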
The proposed algorithm performs the segmentation of the images, which have been
obtained from the BRATS 2013 [9], MIDAS [4], and clinical datasets. The segmentation
results for input slice images with heterogeneous tumor types acquired from the
different datasets are illustrated in Fig. 2. Table 1 discloses the performance ability
of the SOM-based EnFCM algorithm, which utilizes the local nearest neighborhood
function. 71 input slice images of different datasets and modalities are segmented by
the proposed algorithm, and its segmentation efficiency is shown in Table 2.
In Table 2, evaluation of the segmentation efficiency of the proposed algorithm
and various traditional algorithms is portrayed. All the input images used in this paper
are noise free. The validation of the algorithms is done using the quality parameters
such as MSE, PSNR, Jaccard index, and dice overlap index (DOI). The SOM-based
EnFCM algorithm that governs the local neighborhood function performs well and
its segmentation efficiency is substantiated in Fig. 2. Figure 2 concisely explains the
Fig. 2 Comparison of segmentation results obtained using different automated soft computing
algorithms (rows: low-grade image, sagittal and axial views from the clinical dataset, Liver 1 and
Liver 2 from the MIDAS dataset)
4 Conclusion
In this paper, the authors have developed a novel SOM-based EnFCM algorithm for
tumor and lesion identification through segmentation process. The functioning of
SOM-based EnFCM algorithm delivers successful segmentation results, which are
compared with EnFCM algorithm and SOM-based FKM algorithm. The PSNR value
provided by the proposed algorithm is better than the values offered by EnFCM and
SOM-based FKM algorithms. When compared to the traditional algorithm, the DOI
value produced by the proposed methodology is quite higher, and it indicates good
segmentation accuracy. So, the proposed SOM-based EnFCM algorithm delivers precise
identification of the tumor and good segmentation of the tissue regions. In the future,
the proposed methodology could be fine-tuned so that the algorithm can be extended to
segment noisy multimodal images.
Acknowledgements The authors thank Dr. K.G. Srinivasan, MD, RD, Consultant Radiologist and
Dr. K.P. Usha Nandhini, DNB, KGS Advanced MR & CT Scan—Madurai, Tamilnadu, India, for
supporting the research with the patient information. Also, the authors thank the Department of
Electronics and Communication Engineering of Kalasalingam Academy of Research and Educa-
tion, Tamilnadu, India, for permitting to use the computational facilities available in Centre for
Research in Signal Processing and VLSI Design, which was set up with the support of the Depart-
ment of Science and Technology (DST), New Delhi under FIST Program in 2013 (Reference No:
SR/FST/ETI-336/2013 dated November 2013).
References
A. Sailaja · K. V. Ramesh
Department of Chemical Engineering, Andhra University, Visakhapatnam, India
e-mail: sailaja_kruttika@yahoo.co.in
K. V. Ramesh
e-mail: kvramesh69@yahoo.com
B. Sreenivasulu (B) · B. Srinivas
Department of Chemical Engineering, GVP College of Engineering (Autonomous),
Madhurawada, Visakhapatnam 530048, India
e-mail: bslu@rediffmail.com
B. Srinivas
e-mail: bsrini_123@rediffmail.com
1 Introduction
Microfluidics [1, 2] deals with the mixing, reaction, and analysis of small volumes of
fluids, usually in the range of microliters to picoliters, carried out in micron-sized
channels. Fluid delivery plays an important role in all microfluidic systems since the
operating pressures involved are very high. Micropumps with moving parts are difficult
to use and prone to mechanical failure, making them unsuitable for microfluidic
applications. To meet the pumping requirements in these
microdevices, electroosmotic pumping has been favored due to its many advantages
over other types of micropumps. Electroosmotic pumps involve no moving parts
and have much simpler design and are easier to fabricate. Another advantage is the
precise flow control that can be achieved by applying an external electric field.
Most of the existing works [2] have assumed the walls of the microchannel to have
identical zeta potentials. While this is the usual case, there may be situations where
the top and the bottom walls are made of different materials, for instance, when one
wall is made of silicon dioxide (glass) and the other of polydimethylsiloxane (PDMS).
Even if the two walls are of similar material, they could have
different zeta potentials. Deliberate asymmetry in zeta potentials has been utilized to
achieve better control and mixing of fluids [3, 4]. Heterogeneity can also be caused
by defects during fabrication or probable alterations of the surface characteristics
due to the adsorption–desorption mechanisms of certain species from solution in
lab-on-chip devices. This heterogeneity could well play a significant role in mass
transfer and surface reactions in microfluidic devices [5]. Hence, it is important to
be able to predict beforehand the modifications these heterogeneities could cause to
the flow in these microdevices.
Microchannels with heterogeneous walls have been studied [6–8]. Recently, there have
been EOF studies on the flow of non-Newtonian fluids in channels with asymmetric
wall zeta potentials [9–13].
$$\frac{d^2\varphi}{dY^2} = K^2\varphi \quad (1)$$

$$Y = -1, \quad \varphi = \varphi_B = \frac{ze\xi_B}{k_B T}; \quad (2a)$$

$$Y = 1, \quad \varphi = \frac{ze\xi_T}{k_B T} = \frac{\xi_T}{\xi_B}\,\frac{ze\xi_B}{k_B T} = \xi_r\,\varphi_B \quad (2b)$$
where ξr is the ratio of wall potentials. The momentum equation [14] has only the
component along the flow direction and is given below in the dimensionless form:
$$\frac{d^2U}{dY^2} = -A + B\varphi \quad (3)$$
Here, we have made the velocity dimensionless with the mean velocity, Um , and
identified two dimensionless parameters A and B as
$$A = \frac{H^2}{\mu U_m}\frac{dP}{dy}, \qquad B = \frac{2 n_0 z e H^2}{\mu U_m}\frac{d\psi}{dx}$$
Integrating the momentum equation inside the channel, we get the velocity profile
as
$$U = -\frac{A}{2}\left(1 - Y^2\right) + \frac{B}{K^2}\,\xi_{wall}\left(1 - \frac{\cosh(KY)}{\cosh(K)}\right) \quad (4)$$
Because we have made the velocity dimensionless with the mean velocity, the velocity
distribution integrated inside the channel must equal unity and this gives the relation
between the dimensionless parameters A and B as
$$B = \frac{2K^2(A - 3)}{3\varphi_B(1 + \xi_r)\left(1 - \frac{\tanh(K)}{K}\right)} \quad (5)$$
From this equation, we can easily see that pure pressure-driven flow (PDF) is obtained
if A = 3 and pure electrokinetic flow (EKF) is obtained if A = 0. For values
0 < A < 3, we get pressure-assisted electrokinetic flow. By defining the flow this way, we can
compare the PDF and EKF on the same mean-velocity basis.
The energy balance equation neglecting viscous dissipation effects is given in
[14].
The dimensionless energy balance becomes
$$\frac{d^2\theta}{dY^2} + 1 = 0 \quad (6)$$
The boundary conditions then become
$$\theta = t \ \text{ at } \ Y = 1; \quad (7a)$$

$$\theta = 0 \ \text{ at } \ Y = -1 \quad (7b)$$

where $t = \frac{(T_H - T_c)\,k}{\sigma E^2 H^2}$, which is the dimensionless temperature of the top wall. On
integrating the energy balance, we get the temperature profile inside the microchannel
as
$$\theta(Y) = \frac{1 - Y^2}{2} + \frac{t}{2}(1 + Y) \quad (8)$$
Once the velocity and temperature profiles have been obtained, we can then calculate
the quantities of interest for engineering purpose, namely, the friction factor and
the Nusselt numbers. The friction factor is defined as usual [14] by the following
equation:
$$f = \frac{2\tau_w}{\rho U_m^2}$$
Substituting the relation $\left(\frac{d\theta}{dY}\right)_{Y=-1} = \frac{t+2}{2}$ obtained from the energy balance, we
finally get the relation for the Nusselt number as

$$Nu = \frac{2(t + 2)}{\theta_b} \quad (10)$$
Analytical results are possible since the governing equations are linear and we now
present these in this section. The potential distribution inside the EDL is given by
the relation
Substituting this relation in Eq. (4), we get the final velocity profile as
$$U(Y) = \frac{AK^2(Y^2 - 1) + B\varphi_B(Y - 1) - B\xi_r\varphi_B(1 + Y) + B\varphi_B(1 + \xi_r)\cosh(KY) + B\varphi_B(\xi_r - 1)\,\mathrm{cosech}(K)\sinh(KY)}{2K^2} \quad (12)$$
Substituting these in the scaled friction factor relation, we get the following equation
for the friction factor:
$$\frac{f\,Re - 24}{8(A - 3)} = 1 + \frac{(1 - \xi_r)\left(1 - \frac{\tanh(K)}{K}\right) - (1 + \xi_r)\,K\tanh(K)}{3(1 + \xi_r)\left(1 - \frac{\tanh(K)}{K}\right)} \quad (14)$$
and as we see later in the results and discussion section, this approximation is very
good for all K > 5.
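As a quick numerical check of Eq. (14) as reconstructed above, the following Python sketch evaluates f·Re for given A, K, and ξr (an illustrative helper, not code from the paper). With A = 3 it returns the classical pressure-driven value of 24, and for pure electrokinetic flow with K = 10 and symmetric walls it evaluates to about 88.89, consistent with the values quoted in Table 1.

```python
import math

def f_re(A, K, xi_r):
    """Friction factor product f*Re from Eq. (14) (as reconstructed above).
    A = 3: pure pressure-driven flow, A = 0: pure electrokinetic flow."""
    t = 1.0 - math.tanh(K) / K
    rhs = 1.0 + ((1.0 - xi_r) * t - (1.0 + xi_r) * K * math.tanh(K)) / (3.0 * (1.0 + xi_r) * t)
    return 24.0 + 8.0 * (A - 3.0) * rhs

print(f_re(A=3.0, K=10.0, xi_r=1.0))   # pure PDF        -> 24.0
print(f_re(A=0.0, K=10.0, xi_r=1.0))   # pure EKF, K=10  -> ~88.89
```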
The expression for the bulk temperature is lengthy, and we only present the results for
the specific cases of ξr = −0.5, −0.25, 0.25, 0.5, and 1.
$$\xi_r = \frac{1}{2}: \ \theta_b = \frac{T_1 + T_2}{270}; \quad T_1 = 30\left(3 - \frac{9}{K^2} + 4t\right) + A\left(6 + \frac{90}{K^2} + 5t\right); \quad T_2 = \frac{5(A-3)(t-6)}{K\coth(K) - 1} - \frac{15(A-3)\,t\coth(K)}{K}$$

$$\xi_r = \frac{1}{4}: \ \theta_b = \frac{T_3 + T_4}{90}; \quad T_3 = 30 - \frac{9}{K^2} + 36t + A\left(2 + \frac{30}{K^2} + 3t\right); \quad T_4 = \frac{(A-3)(3t-10)}{K\coth(K) - 1} - \frac{9(A-3)\,t\coth(K)}{K}$$

$$\xi_r = -\frac{1}{2}: \ \theta_b = \frac{T_5 + T_6}{90}; \quad T_5 = 30 - \frac{90}{K^2} + A\left(2 + \frac{30}{K^2} + 15t\right); \quad T_6 = \frac{5(A-3)(3t-2)}{K\coth(K) - 1} - \frac{45(A-3)\,t\coth(K)}{K}$$

$$\xi_r = -\frac{1}{4}: \ \theta_b = \frac{T_7 + T_8}{270}; \quad T_7 = 30\left(3 - \frac{9}{K^2} + 2t\right) + A\left(6 + \frac{90}{K^2} + 25t\right); \quad T_8 = \frac{5(A-3)(5t-6)}{K\coth(K) - 1} - \frac{75(A-3)\,t\coth(K)}{K}$$
Once the bulk temperatures are known, we can calculate the Nusselt number given
in Eq. 10.
In the last section, we had derived the analytical results for the potential, velocity,
and temperature distribution inside the microchannel whose walls are maintained
at unequal zeta potentials. The asymmetry in the wall zeta potentials is measured
by the parameter ξr, which can take both positive and negative values. The
specific case of ξr = 1 is the symmetric microchannel, where the two walls are
maintained at the same zeta potential and, in the literature, many such works have
been documented. Our present results can then be compared with the earlier works
in the case of the symmetric wall zeta potential. Table 1 compares the friction factor
results of the present work with that of Chen [15]. We can see that the comparison is
very good. Table 2 compares the Nusselt number results from the literature [15] with
the prediction from the present work. Again, we see that the results are consistent
with the literature values.
Figure 2 shows the variation of the friction factor at the bottom wall with the wall
zeta potential ratio at various values of K. Asymmetric wall zeta potentials (walls
having zeta potentials of opposite sign) increase the friction factor because of the
reverse flow. Also, the friction factor increases with an increase in K value since
an increase in the value of K leads to a decrease in the EDL thickness and large
Table 1 Comparison of the friction factor results of the present work with that of Chen [15]

              Pure electrokinetic flow (A = 0)   Pure pressure-driven flow (A = 3)
Chen [15]     88.8889                            24
Present work  88.89                              24

Table 2 Comparison of the Nusselt number results from the literature [15] with the prediction from the present work

              Pure electrokinetic flow (A = 0, K = 1000)   Pure electrokinetic flow (A = 0, K = 10)
Chen [15]     12                                           11.09969
Present work  11.988                                       11.0997
velocity gradients close to the wall. Large velocity gradients enhance the friction
factor. These conclusions are also valid in the case of Fig. 3. The scaled relation in
Eq. 14 shows that the right side of these equations is independent of A. This then
means that the variation of the left-hand side of these equations is simply a function
of K and ξr only, and independent of the mechanism generating the flow. This is a
simple yet powerful result, as we can easily calculate the right-hand side for one flow
(EKF, PDF, or both) and predict the friction factor for other cases. Figure 4 shows
the approximation in Eq. 15 in comparison to the exact Eq. 14. The approximation
is excellent for this value of K for all ξr . The error involved is less than 1% for this
value of K and will reduce further for larger values of K.
Figures 5 (purely electrokinetic flow) and 6 (electrokinetic flow superimposed
with a pressure-driven flow) show the Nusselt number variation with K for various
values of ξr . It is seen that the Nusselt numbers are more for asymmetric channels
(ξr values both positive and negative) as compared to symmetric channels. This is
because asymmetry results in slower velocities (for ξr positive) and reverse flows (for
ξr negative) both leading to an increase in the temperature of the fluid, thus increasing
the bulk temperature and a consequent fall in Nusselt numbers. The Nusselt number
increases with an increase in K because the EDL thickness decreases with an increase
in K, and thus resulting in a large thermal gradient. Further increase in K does not
lead to an increase in Nusselt number because the flow approaches slug flow and
the temperature gradient remains constant thereafter. Comparing Fig. 5 with Fig. 6, it can
be seen that any pressure drop superimposed on the electrokinetic flow decreases
the Nusselt number as the temperature gradient at the wall decreases. This can be
easily understood from the fact that a purely electrokinetic flow is slug-like and any
pressure-imposed flow is parabolic, and thus leads to a smaller gradient in temperature
at the wall as we move from a pure electrokinetic flow to a pressure superimposed
flow.
4 Conclusions
The present work has analyzed the pressure drop assisted electrokinetic flow in a
microchannel with asymmetric wall zeta potentials. Both walls may have wall zeta
potentials of the same sign or of opposite signs. The Debye–Huckel approximation has
enabled the equations to be solved analytically, and closed-form solutions
have been obtained. The solutions clearly depict the effect of various parameters on
the friction factor and Nusselt numbers. The friction factor increases with asymmetry in
the wall zeta potentials, and walls with zeta potentials of opposite sign have larger
friction factors than walls with the same sign. The analytical solution obtained for the friction factor
showed that the scaled friction factor is independent of the mechanism generating the
flow. Nusselt numbers also show a similar trend with wall zeta potentials. Nusselt
numbers for pure electrokinetic flows are larger as compared to Nusselt numbers
with pressure-assisted electrokinetic flows.
References
1. Masliyah JH, Bhattacharjee S (2006) Electrokinetic and colloid transport phenomena. Wiley
Interscience, New Jersey, USA
2. Stone HA, Stroock AD, Adjari A (2004) Engineering flows in small devices: micro fluidics
towards a lab on a chip. Annu Rev Fluid Mech 36:381–411
3. Hadigol Mohammad, Nosrati Reza, Nourbakhsh Ahmad, Raisee Mehrdad (2011) Numeri-
cal study of electroosmotic micromixing of non-Newtonian fluids. J NonNewton Fluid Mech
166:965–971
4. Nayak AK (2014) Analysis of mixing for electroosmotic flow in micro/nano channels with
surface heterogeneous surface potential. Int J Heat Mass Trans 75:135–144
5. Sadeghi A, Amini Y, Yavari H, Saidi MH (2016) Shear-rate-dependent rheology effects on
mass transport and surface reactions in biomicrofluidic devices. AICHE J 61:1912–1924
6. Soong CY, Wang SH (2003) Theoretical analysis of electrokinetic flow and heat transfer in a
microchannel under asymmetric boundary conditions. J Colloid Interf Sci 265:202–213
7. Mukhopadhyay Achintya, Banerjee S, Gupta C (2009) Fully developed hydrodynamic and
thermal transport in combined pressure and electrokinetically driven flow in a microchannel
with asymmetric boundary conditions. Int J Heat Mass Trans 52:2145–2154
8. Wang L, Wu J (2010) Flow behaviour in microchannel made of different materials with wall
slip velocity and electro-viscous effects. Acta Mech Sin 26:73–80
9. Afonso AM, Alves MA, Pinho T (2012) Electro-osmotic flow of viscoelastic fluids in
microchannels under asymmetric zeta potentials. J Eng Math 71:15–30
10. Seok W, Choi W, Joo S, Lim G (2011) Electroosmotic flows of viscoelastic fluids with asym-
metric boundary conditions. J NonNewton Fluid Mech 187–188:1–7
11. Escandon J, Jimenez E, Hernandez C, Bautista O, Mendez F (2015) Transient electroosmotic
flow of Maxwell fluids in slit microchannel with asymmetric zeta potentials. Eur J Mech
B/Fluids 53:180–189
12. Jimenez E, Escandon J, Bautista O, Mendez F (2016) Startup electroosmotic flow of Maxwell
fluids in a rectangular microchannel with high zeta potentials. J NonNewton Fluid Mech
227:17–29
13. Kaushik P, Chakraborty S (2017) Startup electroosmotic flow of a viscoelastic fluid character-
ized by Oldroyd-B model in a rectangular microchannel with symmetric and asymmetric wall
zeta potentials. J NonNewton Fluid Mech 247:41–52
14. Bird RB, Stewart WE, Lightfoot EN (2002) Transport Phenomena, 2nd edn. Wiley, NY, USA
15. Chen (2012) Fully developed thermal transport in combined electroosmotic and pressure driven
flow of a power law fluids in microchannels. Int J Heat Mass Trans 55:2173–2183
SIW-Based Slot Antenna Fed
by Microstrip for 60/79 GHz Applications
Abstract In this paper, a lotus-shaped slot antenna with SIW is introduced for dual-band
applications at 60 and 79 GHz. It is designed on a Rogers 5880 substrate with εr = 2.2
and a substrate thickness of 0.381 mm. The antenna produces two resonant frequencies,
60 GHz (covering a bandwidth of 2.25 GHz, i.e., 58.868–61.122 GHz) and 79 GHz (covering
a bandwidth of 3.05 GHz, i.e., 77.475–80.518 GHz). The simulation results of the proposed
structure, such as the reflection coefficient, radiation pattern, gain, radiation
efficiency, VSWR, and surface current, are presented.
1 Introduction
The 30–300 GHz frequency range (millimeter-wave band) plays a very important role in
wireless communications and has attracted considerable interest from industry and
academia. Some of the fixed unlicensed millimeter-wave frequencies are used for
millimeter-wave wireless communication networks (60 GHz) [1], automotive radar systems
(79 GHz) [2], and millimeter-wave imaging (94 GHz).
The 60 and 79 GHz bands are unlicensed frequency bands in the millimeter-wave range:
60 GHz is used for millimeter-wave wireless applications, and 79 GHz is used for
automotive radar applications. These bands are fixed by the FCC for communication among
unlicensed devices and are intended for high data rate and short-range applications [3].
where
aR is the width of DRW.
c is the speed of light.
The width of SIW is also dependent on s and d, as given in Eq. 2 and this equation
is developed from rectangular waveguide [13, 14].
$$a_R = w - \frac{d^2}{0.95\,s} \quad (2)$$
Equations (3) and (4) are used to maintain the loss-free radiation [13, 14].
$$d \leq \frac{\lambda_g}{5} \quad (3)$$
and
s ≤ 2d (4)
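To illustrate how the design relations in Eqs. (2)–(4) can be applied, the following Python sketch computes the equivalent rectangular-waveguide width and checks the via constraints. The numerical values are illustrative placeholders only (they are not the dimensions of Table 1).

```python
import math

def siw_design_check(w, d, s, lambda_g):
    """Sketch of the SIW design relations quoted above (all lengths in mm).
    Eq. (2): a_R = w - d**2 / (0.95 * s)   (effective DRW width)
    Eq. (3): d <= lambda_g / 5             (via diameter limit)
    Eq. (4): s <= 2 * d                    (via pitch limit)"""
    a_r = w - d ** 2 / (0.95 * s)                      # Eq. (2)
    constraints_ok = (d <= lambda_g / 5.0) and (s <= 2.0 * d)
    return a_r, constraints_ok

# example with assumed values: w = 2.4 mm, d = 0.2 mm, s = 0.35 mm, lambda_g = 3.0 mm
print(siw_design_check(2.4, 0.2, 0.35, 3.0))
```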
The structure of the paper is as follows: Sect. 2 describes the antenna structure,
Sect. 3 discusses the results of the proposed antenna, and finally Sect. 4 presents the
conclusion.
the copper, and a copper height of 0.035 mm is chosen for designing this structure.
The lengths of the microstrip and the tapered microstrip are chosen as a quarter
wavelength, and the width of the microstrip is calculated using the standard microstrip
equation. The spacing and diameter of the holes are designed with the help of
Eqs. 3 and 4. The parameters used for this design are described in Table 1.
Figure 3 represents the S11 of the proposed antennas. The first antenna (@1) produces
two resonant frequencies of 58.6 GHz (covering a range of 1.9 GHz, i.e.,
57.568–59.47 GHz) and 77 GHz (covering a range of 3 GHz, i.e., 75.766–78.595 GHz)
with respect to the −10 dB reference line. After introducing the rectangular slot at the
bottom of the lotus shape, as represented in Fig. 2, the resonant frequencies are
shifted to 60 and 79 GHz instead of 58.6 and 75.766 GHz, respectively. The second
antenna produces the two resonant frequencies of 60 and 79 GHz and also offers a
slightly improved bandwidth at both resonant frequencies compared to the first antenna.
The proposed antenna is therefore well suited for millimeter-wave wireless
communications (60 GHz) and automotive radar applications (79 GHz). The reflection
coefficient values are −30.644 dB at 60 GHz and −21.793 dB at 79 GHz. Figure 4 depicts
the VSWR of the second antenna (@2). The VSWR values are 1.0681 at 60 GHz and 1.1731 at
79 GHz, and the impedance bandwidth is well matched with respect to the VSWR (2:1)
criterion.
Fig. 4 VSWR
Figure 5 represents the surface current at 60 and 79 GHz; in that figure, the left side
corresponds to 60 GHz and the right side to 79 GHz. It is observed that the current flow
is high at the feed, at the transition between the tapered microstrip and the SIW, and
at the start of the slot; it is also observed that the loss is low because the slot is
etched on both sides of the edges. The two-dimensional radiation patterns of the
proposed structure are represented in Fig. 6. Figure 6a shows the E-field patterns at
the two resonant frequencies, i.e., 60 and 79 GHz, which radiate bidirectionally, and
Fig. 6b shows the corresponding H-field patterns, which also radiate bidirectionally.
Figure 7 describes the radiation efficiency of the proposed structure over the
60–80 GHz frequency range; it is observed to be 86% at 60 GHz and 79.5% at 79 GHz.
Figure 8 indicates the gain over the same frequency range; the maximum gains are
6 dBi at 60 GHz and 6.78 dBi at 79 GHz.
Fig. 5 Surface current: left side 60 GHz and right side 79 GHz
Fig. 6 2D radiation patterns at 60/79 GHz, right E-field and left H-field
4 Conclusion
In this paper, a lotus-shaped SIW slot antenna fed by a microstrip was introduced for
dual-band applications, i.e., 60 and 79 GHz. This antenna targets two applications,
i.e., millimeter-wave wireless applications (60 GHz) and automotive radar applications
(79 GHz). The reflection coefficient, gain, VSWR, and radiation efficiency values of the
proposed antenna are −30.068 dB, 6 dBi, 1.0681, and 86% at 60 GHz and −22.229 dB,
6.79 dBi, 1.1737, and 79.6% at 79 GHz. Furthermore, this antenna can also be extended to
another unlicensed application, namely, millimeter-wave imaging.
Fig. 8 Gain
References
1. Shrivastava P, Rama Rao T (2015) Performance investigations with ATLSA on 60 GHz radio
link in a narrow hallway environment. Prog Electromag Res 58:69–77
2. Cheng S, Yousef H, Kratz H (2009) 79 GHz slot antennas based on SIW in a flexible printed
circuit board. IEEE Trans Antennas Propag 57(1)
3. Lockie D, Peck D (2009) High data rate millimeter wave radios. IEEE Microw Mag 10(5):75–83
4. Ramesh S, Rama Rao T Planar high gain dielectric loaded exponentially TSA for millimeter
wave wireless communications. In: Wireless press communication. Springer, pp. 3179–3192,
June 2015
5. Li Yujian, Luk Kwai-Man (2014) Low cost high gain and broadband substrate- integrated-
waveguide-fed patch antenna array for 60-GHz band. IEEE Trans Antennas Propag
62(11):5531–5538
6. Xu J, Chen ZN, Qing X (2014) CPW center-fed single-layer SIW slot antenna array for auto-
motive radars. IEEE Trans Antennas Propag 62(9):4528–4536
7. Mukherjee S, Biswas A, Srivastava V (2015) Substrate integrated waveguide cavity-backed
dumbbell-shaped slot antenna for dual-frequency applications. IEEE Antennas Wirel Propag
Lett 14:1314–1317
8. Sun D, Xu J, Jiang S (2015) SIW horn antenna built on thin substrate with improved impedance
matching. Electron Lett 51(16):1233–1235
9. Bozzi M, Perregrini L, Wu K, Arcioni P (2009) Current and future research trends in substrate
integrated waveguide technology. Radioengineering 18(2):201–207
10. Nanda kumar M, Shanmuganantham T (2016) Substrate integrated waveguide cavity backed
bowtie slot antenna for 60 GHz applications. In: IEEE international conference on emerging
technology trends
11. Nanda Kumar M, Shanmuganantham T (2016) Substrate integrated waveguide cavity backed
with U and V shaped slot antenna for 60 GHz applications. In: International conference on
smart engineering materials
12. Nanda Kumar M, Shanmugnantham T Current and future challenges in substrate integrated
waveguide antennas—an overview. In: IEEE international conference on advanced computing,
Feb 2016
13. Nanda Kumar M, Shanmugnantham T (2017) Neptune shaped slot antenna with SIW cavity
for 60 GHz applications. Int J Control Theory Appl 10
14. Nanda Kumar M, Shanmugnantham T (2017) SIW based crown shaped slot antenna for 60 GHz
applications. Int J Control Theory Appl 10
Honey Algorithm to Secure
Steganographic Images
1 Introduction
computer files [4–7]. Files can be of any form, and each type of file gives rise to one
type of steganography; for example, an image file gives image steganography, and the
other types are audio and video steganography.
As stated by Shi et al. [8], data hiding facilitates a range of applications by linking
both sets of data in such a way that the cover media can be recovered. Goljan, Fridrich,
and Du introduced another method that can be extended to lossy image formats such as
JPEG [9]. De Vleeschouwer et al. [10] highlighted the need for reversible or lossless
watermarking methods to associate information with losslessly recoverable media or to
enable their authentication.
The organization of the paper is as follows: Sect. 2 describes the algorithm used in
this paper, Sect. 3 summarizes the results, and the conclusion is given in Sect. 4,
followed by the references.
2 Honey Algorithm
Figure 1 represents the flowchart used for embedding and it contains enter the pass-
word, read image, open image, and data hiding.
Step 1: Enter the password
A dialog box is displayed for entering the password, which is predefined in
the program. The user enters the six-digit code; if it matches the predefined
code in the program, the process moves to the next section, otherwise the
program terminates.
Step 2: Read image
This section describes the reading of input image which is used for embed-
ding the data.
Step 3: Open message
It displays the message which is to be embedded.
Figure 2 describes the extraction flowchart used in this technique, which contains
enter the secret key, decrypt the secret image, open the message, and data retrieved.
Step 1: Enter the secret key
A dialog box is displayed for entering the secret key. If this key matches
the key used in the embedding process, the procedure moves to the next step,
i.e., decrypting the secret image; otherwise, fake data is displayed.
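The following Python snippet is a minimal, illustrative sketch of the honey-style behaviour described above (it is not the authors' MATLAB GUI implementation): the correct secret key releases the hidden message, while every incorrect key silently returns plausible fake data, so an attacker cannot tell when a guess has succeeded. The key values and messages are placeholders.

```python
import hashlib

def extract(key_entered, real_key, real_message, fake_message):
    """Return the hidden message for the correct key, a decoy otherwise."""
    def digest(k):
        return hashlib.sha256(k.encode()).hexdigest()

    if digest(key_entered) == digest(real_key):
        return real_message          # genuine hidden data
    return fake_message              # decoy served for every wrong guess

print(extract("123456", "123456", "MEET AT 9PM", "WEATHER IS FINE"))  # real message
print(extract("000000", "123456", "MEET AT 9PM", "WEATHER IS FINE"))  # fake data
```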
The complete result analysis is developed in MATLAB with a GUI model. The GUI dialog box
used for this method is shown in Fig. 3; it contains two columns, and each column
contains six rows. This algorithm is analyzed
for four input images as shown in Fig. 4. Figure 5 represents the secret images for
different input images. The PSNR, MSE, correlation coefficient, and fidelity factors
are also analyzed based on the given formulas.
PSNR and MSE
These parameters are computed between two images, namely the input image and the secret
image. The higher the PSNR, the better the quality of the compressed or reconstructed
image. The MSE and PSNR are the two error metrics used to investigate the quality of an
image. The MSE indicates the mean squared error between the original image and the
compressed image, whereas the PSNR is a measure of the peak error. A lower MSE value
indicates a lower error.
The standard formulas of MSE and PSNR are described in Eqs. 1 and 2.
Fig. 4 Input images (Image 1, Image 2, Image 3, and Image 4)
$$\mathrm{MSE} = \frac{\sum_{M,N}\left[I_1(m, n) - I_2(m, n)\right]^2}{M \times N} \quad (1)$$

$$\mathrm{PSNR} = 10\log_{10}\left(\frac{R^2}{\mathrm{MSE}}\right) \quad (2)$$
where I1 and I2 are the input and secret images. R is equal to 255.
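A minimal Python sketch of Eqs. (1) and (2) is given below (NumPy-based, for illustration only; the toy images are random placeholders, not the test images of Fig. 4).

```python
import numpy as np

def mse_psnr(i1, i2, r=255.0):
    """Compute MSE and PSNR between two equal-sized images (Eqs. (1) and (2))."""
    i1 = np.asarray(i1, dtype=float)
    i2 = np.asarray(i2, dtype=float)
    mse = np.mean((i1 - i2) ** 2)                                        # Eq. (1)
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(r ** 2 / mse)   # Eq. (2)
    return mse, psnr

# toy example with two similar random 8-bit images
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(64, 64))
b = np.clip(a + rng.integers(-2, 3, size=a.shape), 0, 255)
print(mse_psnr(a, b))
```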
Correlation coefficient

$$\mathrm{CC} = \frac{\sum_m \sum_n \left(I1_{m,n} - \overline{I1}\right)\left(I2_{m,n} - \overline{I2}\right)}{\sqrt{\left(\sum_m \sum_n \left(I1_{m,n} - \overline{I1}\right)^2\right)\left(\sum_m \sum_n \left(I2_{m,n} - \overline{I2}\right)^2\right)}}$$
Fig. 5 Secret images corresponding to the input images (Image 1, Image 2, Image 3, and Image 4)
Table 1 presents a comparison of the different parameters (i.e., PSNR, MSE, CC, and FF)
for the different images. It is observed that the PSNR value is high and the MSE is very
low, which reflects that these two parameters are inversely related; based on these
parameter values, it can be stated that secured information can be exchanged through the
HA. Figure 6 represents the decrypted key for the two cases.
4 Conclusion
In effect, the honey algorithm serves up fake data in response to every incorrect
guess of the password or key. This makes it hard to decide when the correct key has
been guessed. Both the encryption and the decryption parts are secured with pass-
words, and the decryption part is again secured with a secret image which drastically
improves the security. Thus, honey algorithm ensures 100% security.
References
8. Shi YQ, Ni Z, Zou D, Liang C (2004) Lossless data hiding: fundamentals, algorithms and
applications. In: IEEE international symposium circuits systems. Vancouver, Canada, pp 33–36
9. Goljan M, Fridrich J, Du R (2001) Distortion-free data embedding. In: Proceedings 4th infor-
mation hiding workshop. Pittsburgh, PA, pp 27–41
10. De Vleeschouwer C, Delaigle JF, Macq B (2001) Circular interpretation on histogram for
reversible watermarking. In: IEEE International multimedia signal processing workshop.
France, pp 345–350
SAC Channel Effects on MIMO Wireless
System Capacity
Abstract In 5G cellular systems, MIMO plays an important role in the radio access
technology and access network topology. MIMO uses an additional space dimension
beyond time and frequency to enhance the service quality through diversity gain, the
data rate (capacity) through multiplexing gain, and the coverage and outage performance through array gain. In
this paper, different multi-antenna system configurations under the impact of spatial
antenna correlation are compared using a capacity performance metric. The results
indicate that the MIMO water-filling algorithm (WFA) method under uncorrelated
scenario gives better capacity compared to a fully correlated scenario. The results
also indicate that by increasing the number of transmitting and receiving antennas,
there is an improvement in the MIMO capacities without boosting the transmitted
power and additional spectral requirement.
1 Introduction
role in its system performance. Therefore, spatial antenna correlation (SAC) among
multiple transmit and receive antennas is essential for the design of propagation
characteristics by system designers [5–11].
The main factor affecting the data rate is the transmission bandwidth. Wider
transmission bandwidths support higher data rates, with the challenge of multipath
fading on the channel. Another way to increase the overall received power, and thereby
achieve a high data rate, is to use multiple-antenna systems such as diversity
receivers, including SIMO, MISO, and MIMO [12]. Ergodic capacity is the maximal rate at
which communication can be achieved, based on the channel state information at the
transmitter (CSIT) and averaged over the distribution of the fading channel. The ergodic
capacity of the MIMO channel is analysed in [4].
The rest of the paper is organized as follows. The MIMO system model with chan-
nel matrix is explained in Sect. 2. Section 3 details the spatial antenna correlation
model with transmitting and receiving correlation matrices. In Sect. 4, the numeri-
cal results for characterizing ergodic capacity with different correlation factors are
obtained. Finally, conclusions are given in Sect. 5.
$$r_k = H s_k + z_k \quad (1)$$

where $r_k = \left[r_k^1, r_k^2, \ldots, r_k^{n_R}\right]^T$ is the signal received at the kth time instant, $s_k = \left[s_k^1, s_k^2, \ldots, s_k^{n_T}\right]^T$ is the transmitted signal, and $z_k$ is AWGN with variance $\sigma_n^2$. Antenna j receives a superposition of the messages transmitted from the transmit antennas i, each multiplied by the corresponding channel response, with Gaussian noise added [6, 7].
The nR × nT channel matrix with elements hji is represented as follows:
Fig. 1 MIMO system model: transmitter with $n_T$ antennas and receiver with $n_R$ antennas

$$H = \begin{bmatrix} h_{1,1} & h_{1,2} & \cdots & h_{1,n_T} \\ h_{2,1} & h_{2,2} & \cdots & h_{2,n_T} \\ \vdots & \vdots & \ddots & \vdots \\ h_{n_R,1} & h_{n_R,2} & \cdots & h_{n_R,n_T} \end{bmatrix} \quad (2)$$
where $h_{j,i}$ is the complex channel coefficient between the ith antenna at the transmitter
and the jth antenna at the receiver side, modeled as zero-mean circularly symmetric
complex Gaussian (ZMCSCG).
• The MIMO channel capacity $C_{MIMO}$ is given by

$$C_{MIMO} = \log_2\det\left(I + \frac{P \cdot HH^{*}}{\sigma_n^2}\right) \quad (3)$$
where (.)* denotes the complex conjugate transpose of the corresponding vector or
matrix and I represents the identity matrix.
$$C_{MIMO} = \sum_{k=1}^{n}\log_2\left(1 + \frac{\bar{P}}{\sigma_n^2}\,|\lambda_k|^2\right) \ \text{bits/s/Hz}, \qquad n = \min(n_R, n_T) \quad (4)$$
With perfect CSIT, the SVD of the subchannel matrix is computed for each subcarrier in the
MIMO system, and the WFA is used to achieve the ergodic capacity [6].
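The following Python sketch illustrates per-eigenmode water-filling for a known channel, in the spirit of Eq. (4). It is a minimal illustration assuming a total power constraint, unit noise variance by default, and a simple bisection search for the water level; it is not the exact routine used in the paper's simulations.

```python
import numpy as np

def waterfilling_capacity(H, total_power, noise_var=1.0):
    """Water-filling over the SVD eigenmodes of H (perfect CSIT), capacity in bits/s/Hz."""
    gains = np.linalg.svd(H, compute_uv=False) ** 2 / noise_var   # |lambda_k|^2 / sigma^2
    gains = gains[gains > 1e-12]
    # bisection on the water level mu so that sum(max(mu - 1/g, 0)) = total_power
    lo, hi = 0.0, total_power + 1.0 / gains.min()
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        used = np.sum(np.maximum(mu - 1.0 / gains, 0.0))
        if used > total_power:
            hi = mu
        else:
            lo = mu
    p = np.maximum(mu - 1.0 / gains, 0.0)                          # power per eigenmode
    return float(np.sum(np.log2(1.0 + p * gains)))

# 3x3 Rayleigh channel example
rng = np.random.default_rng(1)
H = (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))) / np.sqrt(2)
print(waterfilling_capacity(H, total_power=10.0))
```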
Generally, MIMO channel gains are not always independent and identically dis-
tributed. The SAC is associated with the capacity of the MIMO channel. The chan-
nel matrix of MIMO channel model for Rayleigh flat-fading-like channels when the
transmit antenna nT and receive antenna nR spatial gains are correlated is given as
$$H = R_r^{1/2}\, H_w\, R_t^{1/2} \quad (6)$$
where Rr is the correlation matrix resulting in the correlations between receive anten-
nas and Rt is the correlation matrix resulting in the correlations between transmit
antennas, and it is assumed that the correlation among the transmit antenna array is
independent of the correlation among the receive antenna array (and vice versa).
Therefore, now (6) substituted in (3) gives the SAC MIMO capacity as
$$C_{SAC\,MIMO} = \log_2\left(\det\left(I + \frac{P \cdot R_r^{1/2} H_w R_t H_w^{*} R_r^{*(1/2)}}{\sigma_n^2}\right)\right) \quad (7)$$

$$C_{SAC\,MIMO} = \log_2\det\left(I + \frac{P \cdot H_w H_w^{*}}{\sigma_n^2}\right) + \log_2\left(\det(R_r)\right) + \log_2\left(\det(R_t)\right) \quad (8)$$
Correlation matrices are determined by the power spectrum of channel p(φ), antenna
spacing between antennas (d1 ) and antenna patterns (x1 (φ) and x2 (φ)) as
$$R = \int_0^{2\pi} e^{-2j\pi\frac{d_1}{\lambda}\sin\phi}\, x_1(\phi)\, x_2(\phi)\, p(\phi)\, d\phi \quad (9)$$
This section presents the results to illustrate the performance improvement in terms
of the ergodic capacity of 2 × 2 and 3 × 3 MIMO configurations with Monte Carlo
simulations. The Hw channel response entries are generated from a Rayleigh fading
distribution. The concepts detailed in Sects. 2 and 3, along with Eqs. (2)–(9), are
used for calculating the ergodic capacity of the uncorrelated (full rank), correlation
factor 0.5, and fully correlated (rank 1) fading scenarios. All simulations are done in
MATLAB.
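As a rough Python analogue of the Monte Carlo procedure described here (the paper's simulations are in MATLAB), the sketch below generates Rayleigh-fading channels, imposes spatial correlation through the Kronecker model of Eq. (6) using an exponential correlation matrix in the spirit of [10] with a Cholesky factor as the matrix square root, and averages the capacity of Eq. (3) with equal power per transmit antenna. The correlation values, SNR, and trial count are assumptions for illustration.

```python
import numpy as np

def exp_corr(n, p):
    """Exponential spatial correlation matrix R[i, j] = p**|i - j|."""
    idx = np.arange(n)
    return p ** np.abs(idx[:, None] - idx[None, :])

def ergodic_capacity(n_t, n_r, snr_db, p, trials=2000, rng=None):
    """Monte Carlo estimate of the MIMO capacity under the Kronecker model of Eq. (6)."""
    rng = rng or np.random.default_rng(0)
    snr = 10.0 ** (snr_db / 10.0)
    rt_sqrt = np.linalg.cholesky(exp_corr(n_t, p) + 1e-12 * np.eye(n_t))
    rr_sqrt = np.linalg.cholesky(exp_corr(n_r, p) + 1e-12 * np.eye(n_r))
    caps = []
    for _ in range(trials):
        hw = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
        h = rr_sqrt @ hw @ rt_sqrt.conj().T            # correlated channel, Eq. (6)
        m = np.eye(n_r) + (snr / n_t) * h @ h.conj().T  # equal power per transmit antenna
        caps.append(np.log2(np.linalg.det(m).real))
    return float(np.mean(caps))

# compare uncorrelated (p = 0) and highly correlated (p = 0.95) 3x3 channels at 18 dB
print(ergodic_capacity(3, 3, 18.0, 0.0), ergodic_capacity(3, 3, 18.0, 0.95))
```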
Figure 2 shows the ergodic capacity obtained due to different channel conditions
(Perfect CSIT and No CSIT) for different SNR values in dB. At 18 dB of SNR, the
capacities are 14.351 and 14.345 bps/Hz, respectively. It is observed that perfect CSIT
does not help to improve the spectral efficiency at high SNR, and it performs almost
equally to the ergodic capacity of the no-CSIT case.
The ergodic capacities computed for 3 × 3 and 2 × 2 MIMO configurations due to
the WFA algorithm are illustrated in Fig. 3. The results indicate that WFA gives better
ergodic capacity for fully uncorrelated than completely correlated fading scenarios.
For example, the percentages of improvement in ergodic capacity due to the WFA
in uncorrelated scenario compared to the correlated scenario are given below:
i. At 2 dB of SNR, the percentage of improvement in ergodic capacities are 8.13%
and 21.86% for 2 × 2 and 3 × 3 MIMO, respectively.
ii. At 18 dB of SNR, the percentage of improvement in ergodic capacities are
41.75% and 82.19% for 2 × 2 and 3 × 3 MIMO, respectively.
Fig. 2 The spectral efficiency of 3 × 3 MIMO system with and without CSIT versus SNR in dB
Fig. 3 Ergodic capacity in bits/s/Hz versus SNR in dB with different levels of spatial antenna
correlation factors for WFA algorithm
Table 1 presents the capacities of SAC MIMO systems obtained under different
antenna correlation factors. The 3 × 3 MIMO under fully uncorrelated fading sce-
nario at 18 dB of SNR is giving the spectral efficiency of 15.04 bps/Hz and 2 × 2
MIMO is giving the spectral efficiency of 10.13 bps/Hz, whereas with correlation
factor 0.5, 3 × 3 MIMO spectral efficiency is 13.8 bps/Hz and 2 × 2 MIMO is giving
9.443 bps/Hz.
Table 1 Ergodic capacity (bits/s/Hz) with different levels of SAC factors versus SNR in dB

                    2 × 2                                                3 × 3
S. no.  SNR (dB)  Uncorrelated  Correlation       Fully correlated   Uncorrelated  Correlation       Fully correlated
                  (p = 0)       factor (p = 0.5)  (p = 1)            (p = 0)       factor (p = 0.5)  (p = 1)
1       2         2.672         2.586             2.471              4.007         3.797             3.288
2       4         3.292         3.171             2.974              4.999         4.593             3.93
3       6         4.037         3.806             3.414              6.092         5.707             4.57
4       8         4.864         4.541             3.991              7.296         6.802             5.147
5       10        5.704         5.38              4.518              8.617         8.036             5.753
6       12        6.694         6.267             5.45               10.04         9.329             6.456
7       14        7.746         7.266             5.942              11.66         10.66             7.065
8       16        8.897         8.36              6.533              13.28         12.11             7.565
9       18        10.13         9.443             7.146              15.04         13.8              8.255
10      20        11.22         10.64             7.808              16.77         15.44             8.937
In the worst-case condition with the fully correlated scenario at 20 dB SNR, the
spectral efficiency of the 3 × 3 MIMO system decreases by 46.7% and that of the 2 × 2
MIMO system decreases by 30.45%.
5 Conclusions
This paper estimates the ergodic capacity of multiple input multiple output antenna
systems for different configurations under perfect and no channel state information.
The MIMO WFA method under uncorrelated scenario gives better capacity (11.22
bps/Hz) compared to fully correlated scenario capacity (7.808 bps/Hz). Further, it
is found that an increase in antenna configurations also increases the capacity. For
example, the capacity of 3 × 3 MIMO configuration (16.77 bps/Hz) is observed
higher than that of 2 × 2 MIMO configuration (11.22 bps/Hz) system. It is found
that the MIMO system with different configurations provides the higher percentage
of improvement in the channel capacities with correlation factor p values ranging
from the best-case (p = 0) to worst-case (p = 1) conditions. So, the effect of spatial
antenna correlation among both the transmit antennas and the receive antennas on the
capacity becomes more pronounced at high SNR or for larger MIMO configurations. The SAC
parameters discussed in this paper can be implemented in MIMO systems for achieving
improved performance in LTE-A systems.
Acknowledgements The work undertaken in this paper is supported by Ministry of Social Justice
and Empowerment, Govt. of India, New Delhi, under UGC NFOBC Fellowship Vide Sanction letter
no. F./2016-17/NFO-2016-17-OBC-AND-26194/(SA-III/Website) dated February 2016.
References
1. Sasibhushana Rao G (2013) Mobile cellular communication. Pearson Education, New Delhi
2. Van Zelst A (2000) Space division multiplexing algorithms. In: 10th mediterranean electrotech-
nical conference (MELECON) 2000, vol 3, pp 1218–1221
3. Telatar E (1999) Capacity of multi-antenna Gaussian channels. Eur Trans Telecommun
10(6):585–595
4. Foschini GJ, Gans MJ (1998) On limits of wireless communications in a fading environment
when using multiple antennas. Wirel Pers Commun 6(3):311–335
5. Vaughan RG, Anderson JB (1987) Antenna diversity in mobile communications. IEEE Trans
Antennas Propag 49:954–960
6. Goldsmith A (2005) Wireless communications. Cambridge University Press, pp 299–310
7. Schumacher L et al (1999) MIMO channel characterization. Metra Deliverable D2, IST-1999-
11729/AAU-WP2-D2-V1.1.doc
8. Winters J (1987) On the capacity of radio communication systems with diversity in a Rayleigh
fading environment. IEEE J Sel Areas Commun 5:871–878
9. Durgin GD, Rappaport TS (1999) Effects of multipath angular spread on the spatial cross-
correlation of received voltage envelopes. In: 49th IEEE vehicular technology conference
(VTC), vol 2, pp 996–1000
10. Loyka SK (2001) Channel capacity of MIMO architecture using the exponential correlation
matrix. IEEE Commun Lett 5(9):369–371
11. Paulraj A, Nabar R, Gore D (2003) Introduction to space-time wireless communication. Cam-
bridge University Press, pp 66–153
12. Shiu D-S, Foschini GJ, Gans MJ, Kahn JM (2000) Fading correlation and its effect on the
capacity of multielement antenna systems. IEEE Trans Commun 48(3):502–513
Sagnac and Orbital Eccentricity-Based
Pseudo-range Modeling for GPS
Navigation
1 Introduction
GPS positioning services provide precise and reliable navigation anywhere on or above the
surface of the earth. To achieve precise navigation, highly sensitive GPS receivers
are designed. Development of such highly accurate satellite measurement techniques
makes the receiver vulnerable to relativistic effects. The relativistic effects on GPS are
because of the relative motion of the GPS satellite and the receiver (special relativity)
[1] and due to different gravitational potentials acting on the satellite in space and
the receiver on the earth (general relativity) [2, 3]. Though there are a number of
relativistic effects influencing the satellite navigation, the orbital eccentricity and
Sagnac effect are predominant [4–6]. In the following sections, the effects due to
orbital eccentricity and Sagnac effect are explained, and the impact of these errors
on signal emission time, pseudo-range, and satellite position are also presented.
Precise synchronization of atomic time onboard the satellite with GPS time is vital to
obtain precise receiver position. The timing information in the satellite signal allows
the receiver to calculate the time at which the signal has been emitted (SET) from
the satellite antenna. This signal emission time is biased by error due to atomic clock
aging, orbital eccentricity, and Sagnac effect.
The following is the signal transmitted by ith satellite:
i
SL1 (tSET ) 2PC/A (X i (tSET ) ⊕ Di (tSET )) cos(2π fL1 tSET + ϕL1 )
+ 2PY (Y i (tSET ) ⊕ Di (tSET )) sin(2π fL1 tSET + ϕL1 ) (1)
where
fL1 is the GPS L1 frequency (Hz)
ϕ are phase noise and oscillator drift component (rad)
PC/A is coarse/acquisition code signal power (watts)
PY is encrypted precision code signal power (watts)
tSET is the satellite clock error corrected Signal Emission Time (SET) (s)
The satellite transmitted signal has the time information and the navigation data.
But the time parameter tSET is biased by orbital eccentricity and the Sagnac effect.
Hence, these error needs to be precisely estimated for each epoch of observation data
and for all the satellites visible in each epoch in order to get accurate pseudo-range
and satellite position.
The satellite clock error correction would be sufficient to give the accurate SET and
further, the precise receiver position if the satellite orbits are perfectly circular and
provided there are no major relativistic effects [7]. The satellite orbits are elliptical
and the eccentricity of the orbit is very small (e < 0.02) [8], but it still affects the
one-way time transfer from the satellite to the receiver. The computations involved
in orbital eccentricity correction are as given below
The mean motion of the satellite is calculated as
$$n = \sqrt{\mu / a^3} + \Delta n \quad (2)$$
where
Mek is mean anomaly at signal emission time for ekth epoch (rad)
Mo is mean anomaly at reference time (rad)
tek is the time difference between SET and the time of epoch (s)
The eccentric anomaly is calculated from Kepler's equation through an iterative
method with the initial assumption E0 = Mek, as sketched below.
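A minimal Python sketch of this fixed-point iteration is shown below (illustrative only; the eccentricity value in the example is of the order quoted for SVPRN 17, and the tolerance and iteration limit are assumptions).

```python
import math

def eccentric_anomaly(m_ek, e, tol=1e-12, max_iter=20):
    """Iteratively solve Kepler's equation E = M + e*sin(E) with E0 = M_ek (radians)."""
    e_k = m_ek                            # initial assumption E0 = M_ek
    for _ in range(max_iter):
        e_next = m_ek + e * math.sin(e_k)
        if abs(e_next - e_k) < tol:
            return e_next
        e_k = e_next
    return e_k

# example: GPS-like eccentricity of the order of 0.00617
print(eccentric_anomaly(1.2, 0.00617083))
```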
When the satellite is at lowest altitude and when it is approaching earth’s polar
region, the velocity of the satellite increases and the gravitational force experienced
by the satellite is also lower, hence the onboard satellite clocks run slower [10].
The Sagnac effect is the angular motion of the location of the GPS receiver due to
the earth’s rotation around its axis during signal propagation. The GPS receiver posi-
tion is determined in ECEF coordinate frame and the satellite position is determined
in ECI coordinate frame [11]. Due to the motion of the earth, the receiver on the
surface of the earth will experience a finite rotation with respect to the nonrotating
ECI coordinate frame. If the receiver experiences an angular motion away from the
direction of satellite orbital path, the propagation time will increase and vice versa
[12].
The change in the GPS receiver location during the signal propagation time due to the
Sagnac effect is denoted by δROT. The Sagnac effect is corrected by transforming the ith
satellite coordinates $X_s^i = (x_s^i, y_s^i, z_s^i)$, which are calculated at the signal emission
time, to the satellite coordinates $X_R^i = (x_R^i, y_R^i, z_R^i)$ at the signal reception time.
$$M_{ROT}(\omega_E \times \delta_{ROT}) = \begin{bmatrix} \cos(\omega_E \times \delta_{ROT}) & \sin(\omega_E \times \delta_{ROT}) & 0 \\ -\sin(\omega_E \times \delta_{ROT}) & \cos(\omega_E \times \delta_{ROT}) & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (9)$$
Correcting the satellite position and pseudo-range for Sagnac effect will result in
a more accurate receiver position. The impact on pseudo-range is in the order of
tens of meters [13]. The Sagnac effect corrected satellite positions are used for the
receiver position estimation and the pseudo-ranges are calculated using the Sagnac
effect corrected satellite position.
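The rotation of Eq. (9) can be applied directly to the emission-time coordinates; the Python sketch below shows the operation. The WGS-84 earth rotation rate is a standard constant, while the position and propagation time in the example are illustrative values only.

```python
import math

OMEGA_E = 7.2921151467e-5   # WGS-84 earth rotation rate (rad/s)

def sagnac_correct(sat_pos_emission, delta_rot):
    """Rotate the emission-time satellite position about the z-axis by the earth
    rotation accumulated over the propagation time delta_rot (s), as in Eq. (9)."""
    x, y, z = sat_pos_emission
    a = OMEGA_E * delta_rot
    xr = math.cos(a) * x + math.sin(a) * y
    yr = -math.sin(a) * x + math.cos(a) * y
    return (xr, yr, z)                      # z-coordinate is unchanged

# example: ECEF-like position in meters and ~77 ms propagation time (illustrative)
print(sagnac_correct((15600e3, 21500e3, 5200e3), 0.077))
```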
The errors due to orbital eccentricity and the Sagnac effect are estimated for a
geographical location in the Indian subcontinent using typical ephemerides collected on
April 6, 2016 from the dual-frequency GPS receiver located at the Department of
Electronics and Communications Engineering, Andhra University College of Engineering,
Visakhapatnam (latitude 17.73° N/longitude 83.319° E), India. For the same geographical
location, similar results are obtained for ephemerides data of other days of the year
and also for the data collected in different years. The data is in RINEX 2.10 format.
On April 6, 2016, the ephemerides data is received from 02:00:00 to 24:00:00 h.
During this observation period of 22 h, out of 32 satellites, a minimum of nine
satellites was visible in each epoch. Though the errors are computed and analyzed for
all the visible satellites in each epoch, in this paper the errors due to orbital
eccentricity and the Sagnac effect, and their impact on the SET, pseudo-range, and
position of Satellite Vehicle Pseudorandom Noise 17 (SVPRN 17), are presented.
Table 1 shows that for SVPRN 17, the eccentricity varied from 0.00617083 to
0.00617060 during the entire observation period. This resulted in the error in pseudo-
range of SVPRN 17 which varies from −4.2 to 4 m.
Table 2 details the signal propagation time of the satellite and the earth's angle
of rotation during the propagation time. The table also shows the error in the
pseudo-range due to the Sagnac effect. From the table, it is observed that the signal
propagation time is higher at lower elevation angles, and as the satellite approaches
the zenith, the propagation time is lower. The Sagnac effect on the pseudo-range is of
the order of tens of meters, and at 22 h the least effect on the pseudo-range, of ≈4 m, is observed.
Figure 1 shows the elevation angle of the satellite and the earth's rotation during
the signal propagation time. From the figure, it is observed that near the zenith the
angle of the earth's rotation is smaller, as the signal transit time is short. For
SVPRN 17, the highest angle of rotation is observed at the 5° elevation. During the
entire observation period from 16:05:00 to 23:59:45 h, the elevation angle varied from 5° to 59°.
Fig. 1 Earth's rotation during the signal propagation of SVPRN 17 (left axis: earth's
rotation during signal propagation [rad], ×10⁻⁶; right axis: satellite elevation at SET
[degrees]; x-axis: time of epoch at 15-second intervals [Hrs])
From Fig. 2, it is observed that the pseudo-range error due to Sagnac effect devi-
ated ±9.55 m from expected error. The minimum error of −10.21 m is observed
at 22:34:30 h, and this means the observed pseudo-range is 10.21 m shorter due to
Sagnac effect than the actual range and 33 ns of error in measured signal propagation
time.
Table 3 shows the details of the satellite positions corrected for the Sagnac effect.
The corrected satellite positions are compared with the uncorrected position and the
errors in the respective coordinate are also shown in the table. In order to correct
the Sagnac effect, the satellite position estimated from broadcast ephemerides are
rotated about the z-axis, hence the z-coordinate of the position remains the same as
that of the uncorrected position. This implies zero error in the Sagnac effect corrected
z-coordinate of the satellite position.
Fig. 2 Error in the pseudo-range [m] of SVPRN 17 due to the Sagnac effect versus time of
epoch at 15-second intervals [Hrs] (minimum error: −10.21 m; maximum error: 39.4 m;
mean error: 19.34 m; standard deviation: 9.55 m)
Figure 3 shows the change in x- and y-coordinate errors of the satellite posi-
tion. From the figure, it is observed that error in x-coordinate is higher than the
y-coordinate error. The x-coordinate error deviated ±76 m from its expected value.
The y-coordinate error deviated by ±12 m from its expected value. The z-coordinate
of the position remains unchanged.
Fig. 3 Error in the satellite position coordinates [m] along the x-, y-, and z-axes
versus time of epoch [Hrs]
4 Conclusions
In this paper, the algorithm to model the pseudo-range and correct the satellite posi-
tion for orbital eccentricity and Sagnac effect are presented. The implementation of
the algorithm to estimate errors due to orbital eccentricity and Sagnac effect proves
that inaccuracy in SET, pseudo-range, and satellite position needs to be modeled in
order to obtain the precise receiver position. Due to orbital eccentricity, the error
in SET of SVPRN 17 varies from ≈−14.13 to 14.13 ns. The pseudo-range error of
SVPRN 17 due to Sagnac effect deviated ±9.55 m from the expected error. The max-
imum pseudo-range error of 39.4 m is observed on SVPRN 17 which implies that the
observed pseudo-range is 39.4 m longer due to Sagnac effect than the actual range
and 130 ns of error in measured signal propagation time. Due to Sagnac effect, the
deviations from mean in x- and y-coordinate error are ±76 m and ±12 m, respectively.
The algorithm given in this paper can be implemented to obtain precise navigation
solution in civil aviation and category I/II Precision Approach (PA) aircraft’s landing.
References
1. Einstein A (1916) Die Grundlagen der allgemeinen Relativiaetstheorie. Annalen der Physik,
Folge IV, Band 49, Nr. 7. Berlin
2. Einstein A (1905) Zur Electrodynamik bewegter Körper, Annalen der Physik. Berlin, pp 17
3. Winterberg F (1956) Relativistische Zeitdilatation eines künstlichen Satelliten. Astronautica
Acta II 25:08–10
4. Pireaux S Relativity and space geodesy. IAU General Assembly, Prague, 21st Aug 2006
5. Wellenhof BH, Lichteneger H, Collins J (2001) Global positioning system. theory and practice.
Springer Wien, New York
D. B. V. Jagannadham (B)
Gayatri Vidya Parishad College of Engineering (Autonomous), Madhurawada,
Visakhapatnam, India
e-mail: dbvjagan@gmail.com
D. V. Sai Narayana
SENSE Division, VIT University, Vellore, India
e-mail: dvsnarayana987@gmail.com
D. Koteswar
Department of ETC, IIEST Shibpur, Howrah, India
e-mail: doddikoteshhwor@gmail.com
S. Tripathi
Wipro Technologies, Bangalore, India
e-mail: adhilaxmidoddi@gmail.com
1 Introduction
Nowadays, in India, owning one's own transportation is common. However, those who
cannot afford their own vehicle depend on public transportation such as buses, trains,
and cabs, which provide a convenient alternative for reaching their destinations. In
addition, public transportation provides service at low cost and is hence the best
choice for low-income families traveling to other destinations. As buses and trains
offer one of the cheapest modes of transport, more than 70% of middle- and low-income
people in India rely on them [1]. People in India mostly prefer public transportation
for traveling longer distances. However, public transport sometimes has adverse effects
on the passengers, such as the following:
Passengers suffer in public transport when the destination is reached at odd timings,
especially at midnight or in the early morning. There is a possibility that passengers
traveling over long distances may miss their destination station at midnight, or, due to
a lack of information about the place they are traveling to, end up in an unknown city
or village [2].
One more problem in long-distance public transportation is passenger safety. Nowadays,
it is common to find news of thefts, violence, and harassment of women on public
transport in the newspapers. Moreover, at odd times of day and in isolated places,
public transport may be unable to provide safety, as it either lacks security personnel
or information about the safety issue does not reach the security staff on the transport
system.
Another problem in long-distance public transportation is passenger health. Generally,
in India, one has to reserve the mode of transportation some days prior to the day of
the journey. On the day of the journey, the passenger may not be in good health, or may
face a health problem during the journey, especially if the passenger is elderly.
Lastly, Indian public transport often does not reach its destination on time, so onboard
passengers are often unable to trace their location and the distance they still have to
cover [3, 4].
Figure 1 shows the condition of a passenger traveling over a long distance.
To avoid the abovementioned problems illustrated in Fig. 1, we need an onboard system
that establishes a communication link between the passengers and the onboard transport
staff, lets the transport staff know about the problems faced by the boarded passengers,
and reminds the passengers about the destination before it is reached. Let us call this
system the "Passenger Safety Monitoring and Destination Alert System" (PSDMS). This
system could provide a solution to overcome the above problems.
There are four major parts in the PSDMS: controller, GPS receiver, GSM module, and
a Wi-Fi module. The controller controls the operation of GPS receiver, GSM module,
and Wi-Fi module with appropriate control signals. The GPS receiver extracts the
positional information of the vehicle on which the PSDMS is mounted. The GSM
module establishes a communication link between passengers and the onboard trans-
port staff. In the absence of GSM, the Wi-Fi module establishes a communication
link between passengers and transport staff.
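To make the division of responsibilities among these four blocks concrete, the following short Python sketch composes them; the class and method names (read_fix, has_network, send_sms, send) are illustrative placeholders and not the authors' firmware API.

```python
# Structural sketch of the four PSDMS blocks; all names are hypothetical placeholders.
class PSDMS:
    def __init__(self, gps, gsm, wifi):
        self.gps = gps      # GPS receiver: supplies the vehicle's position
        self.gsm = gsm      # GSM module: SMS link between passengers and staff
        self.wifi = wifi    # Wi-Fi module: fallback link when GSM is unavailable

    def read_position(self):
        # Latitude/longitude of the vehicle on which the PSDMS is mounted
        return self.gps.read_fix()

    def notify(self, number, text):
        # Prefer the GSM link; fall back to Wi-Fi when no GSM network is available
        if self.gsm.has_network():
            self.gsm.send_sms(number, text)
        else:
            self.wifi.send(number, text)
```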
1.3 Applications
The PSDMS can be used in any mode of the transport system to ensure the passengers' safety and to alert the passengers before their destination is reached. In the case of railways, where passengers travel long distances, the PSDMS helps the passengers to communicate with the onboard railway staff. In the case of road transport such as buses or cabs, the PSDMS helps the passengers to establish a communication link to the central transport hub or directly to the service provider.
The following steps briefly demonstrate the working of the PSDM system.
STEP 1: Passenger’s information regarding his destination and mobile number is
taken during the time of ticket booking.
STEP 2: The GPS receiver present in the system tracks the vehicle/train position continuously and sends it to the microcontroller.
STEP 3: The microcontroller tracks the location, determines the next possible destination of the vehicle/train, and looks up the list of passengers travelling to that destination. It then triggers the GSM modem to send an "Alert SMS" to those passengers so that they receive the alert before their destination is reached.
STEP 4: On receiving an SMS, the microcontroller checks whether the mobile number from which the message was received belongs to a registered passenger. If it does, it checks the content of the message.
If the message has “HELP 1”, it sends a message to TTE and ATTENDANT that
passenger of berth XYZ has MEDICAL EMERGENCY.
If the message has “HELP 2”, it forwards the message to TTE and SECURITY
that passenger of berth XYZ has SECURITY ISSUE.
If the message has “HELP 3”, it sends back a message to a passenger with the
proper position of the train.
The complete working flow is illustrated in Figs. 3 and 4; a routing sketch of STEP 4 follows.
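The STEP 4 routing logic can be summarised in the hedged Python sketch below; registered_passengers, send_sms, and the staff phone numbers are hypothetical placeholders used only to make the branching explicit, not the system's actual firmware.

```python
# Minimal sketch of the STEP 4 message routing; all names are hypothetical placeholders.
registered_passengers = {
    "+911234567890": {"berth": "S5-32", "destination": "XYZ"},
}

def send_sms(number, text):
    # Stand-in for the GSM-modem routine used on the real hardware
    print(f"SMS to {number}: {text}")

def handle_incoming_sms(sender, text, tte_no, attendant_no, security_no, position_info):
    """Route HELP 1/2/3 requests as outlined in STEP 4."""
    passenger = registered_passengers.get(sender)
    if passenger is None:                      # ignore unregistered numbers
        return
    berth = passenger["berth"]
    msg = text.strip().upper()
    if msg == "HELP 1":
        send_sms(tte_no, f"Passenger at berth {berth} has a MEDICAL EMERGENCY")
        send_sms(attendant_no, f"Passenger at berth {berth} has a MEDICAL EMERGENCY")
    elif msg == "HELP 2":
        send_sms(tte_no, f"Passenger at berth {berth} has a SECURITY ISSUE")
        send_sms(security_no, f"Passenger at berth {berth} has a SECURITY ISSUE")
    elif msg == "HELP 3":
        send_sms(sender, f"Current train position: {position_info}")

handle_incoming_sms("+911234567890", "HELP 1", tte_no="+910000000001",
                    attendant_no="+910000000002", security_no="+910000000003",
                    position_info="12 km from the next station")
```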
2 Results
The GPS receiver receives GPS data continuously in different formats such as GPGGA, GPGLL, GPGSA, GPRMC, etc. The µC is programmed to receive the GPS data in the GPRMC format. First, the µC checks the validity of the received GPS data and then extracts the latitude and longitude. This process is shown in Fig. 5.
After the latitude and longitude are extracted, the µC converts them to a decimal format that can be used in mathematical equations for calculating the distance from the nearby stations (Figs. 6 and 7).
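As an illustration of this validity check and coordinate conversion (not the authors' microcontroller code), a small Python sketch following the NMEA 0183 GPRMC field layout could look like this; the example sentence at the bottom uses synthetic values.

```python
# Illustrative sketch of GPRMC validity checking and coordinate conversion.

def parse_gprmc(sentence):
    """Return (lat, lon) in decimal degrees from a $GPRMC sentence, or None if invalid."""
    fields = sentence.split(",")
    if not fields[0].endswith("GPRMC") or fields[2] != "A":   # 'A' marks a valid fix
        return None
    lat = _ddmm_to_decimal(fields[3], fields[4])
    lon = _ddmm_to_decimal(fields[5], fields[6])
    return lat, lon

def _ddmm_to_decimal(value, hemisphere):
    # NMEA latitude is ddmm.mmmm and longitude is dddmm.mmmm
    split = 2 if hemisphere in ("N", "S") else 3
    degrees = float(value[:split])
    minutes = float(value[split:])
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

# Synthetic example frame (checksum not validated in this sketch):
print(parse_gprmc("$GPRMC,123519,A,1743.0000,N,08318.0000,E,022.4,084.4,230394,003.1,W*6A"))
```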
If the message contains "HELP 1", then the µC checks the berth details (x) of the passenger with the registered mobile number and immediately sends a message to the TTE stating that there is a medical emergency at berth number x. If the message contains "HELP 2", then the µC checks the berth details (x) of the passenger with the registered mobile number and immediately sends a message to the TTE stating that there is a security issue at berth number x. If the message contains "HELP 3", then the µC checks the latitude and longitude of the present position, computes the distances from the present position to the approaching station and to the last station, and informs the registered passenger that he is at a distance "Y" from the next station or the last station, whichever is closer (Figs. 8 and 9).
The PSDMS checks the vehicle's positional information continuously. Based on the present position, it determines the next approaching station and calculates the distance between the present position and that station. Once this distance becomes less than 30 km, the PSDMS lists the passengers due to get down at the next station, takes their mobile numbers, and sends them a destination alert through SMS. This process runs in a loop until all the passengers getting down at the next station have been alerted. Once a passenger has been alerted about the approaching destination, the stored information about that passenger is deleted (Fig. 10).
Fig. 9 Listing passengers for next station and sending destination alert
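One way to picture this alert loop is the hedged Python sketch below: a haversine distance test against the 30 km threshold, followed by alerting and deleting the matching passenger records. The station coordinates, passenger records, and send_sms callback are hypothetical placeholders, not the authors' implementation.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two latitude/longitude points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def alert_pass(current, passengers, next_station, send_sms, threshold_km=30.0):
    """One pass of the alert loop; returns the passengers still awaiting an alert."""
    dist = haversine_km(current[0], current[1], next_station["lat"], next_station["lon"])
    if dist >= threshold_km:
        return passengers
    remaining = []
    for p in passengers:
        if p["destination"] == next_station["name"]:
            send_sms(p["mobile"], f"Your destination {p['destination']} is about {dist:.0f} km away")
        else:
            remaining.append(p)   # a record is dropped only after its alert has been sent
    return remaining
```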
The latitude and longitude values are then extracted from the GPS data frames
received. As the testing was performed at Avantel Ltd., Visakhapatnam, the above
figure gives the positional information of Avantel Ltd.
Fig. 12 Passenger's information for health problem

The PSDMS, in turn, sends the information to the TTE along with the passenger's berth details (as shown in Fig. 13). The passenger can report his/her security-related issues by simply sending an SMS with the content HELP 2 to the PSDMS (as shown in Fig. 14). The PSDMS, in turn, sends the information to the TTE along with the passenger's berth details (as shown in Fig. 15).
The PSDMS also helps passengers to know their current location on request. As trains in India generally do not follow the schedule, passengers find it difficult to know where they are.
A passenger can request his/her location information by simply sending an SMS with the content HELP 3 to the PSDMS (as shown in Fig. 16). The PSDMS gathers the positional information from the GPS data, gets the location of the nearby station, calculates the distance between the current position and that station, and sends the information back to the passenger, as shown in Fig. 16.
During long-distance journeys, GSM network coverage may not be available throughout, which can prevent the PSDMS from establishing a link between the passengers and the onboard railway staff. To avoid such inconvenience, the PSDM system is provided with a Wi-Fi module. The passengers can communicate with the transport staff over Wi-Fi via the PSDMS in the absence of a proper GSM network.
Fig. 14 Passenger’s
information receiving about
problem with passenger’s
security
For this, the passengers have to install Term UDP (Android App) on their smart-
phones. Term UDP establishes a communication link between different devices.
Fig. 17 Passenger’s information for health problem over Wi-Fi and the reply from PSDM system
Once the communication link between the passenger's mobile and the PSDMS is established over Wi-Fi, messages can be sent and received by both devices. The passengers send the same message contents over Wi-Fi as mentioned earlier to report the problems faced during the journey to the PSDMS, which in turn communicates them to the TTE, who is also connected to the PSDMS over Wi-Fi.
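On the PSDMS side, this Wi-Fi fallback essentially amounts to a small UDP listener that accepts the same HELP messages sent from the Term UDP app. The sketch below is a simplified illustration only; the port number and reply text are assumptions, not values taken from the paper.

```python
import socket

def serve_wifi_requests(bind_ip="0.0.0.0", port=5005):
    """Listen for HELP-style UDP datagrams from the passenger's Term UDP app."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((bind_ip, port))
    print(f"PSDMS Wi-Fi service listening on {bind_ip}:{port}")
    while True:
        data, addr = sock.recvfrom(256)                  # one HELP message per datagram
        request = data.decode(errors="ignore").strip().upper()
        if request in ("HELP 1", "HELP 2", "HELP 3"):
            sock.sendto(b"Request received, TTE has been informed", addr)
        else:
            sock.sendto(b"Unknown request", addr)

if __name__ == "__main__":
    serve_wifi_requests()
```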
Figures 17, 18, 19, 20, and 21 show the assistance provided by PSDMS to pas-
sengers during the journey in absence of the GSM network.
Fig. 19 Passenger’s information for berth number over Wi-Fi to TTE berth number security issue
over Wi-Fi
Fig. 20 Passenger’s security issue for berth number over Wi-Fi to TTE
Missing the destination can now be avoided and the safety of the passengers ensured by implementing the PSDM system within the vehicle. It combines GPS (for tracking the position of the train), GSM (for SMS), Wi-Fi, and embedded programming to interface the various circuits to a microcontroller.
After studying this paper, one can build such a model for practical applications with slight changes; if implemented in real time, it will be greatly helpful to the public. The application can also be extended with technologies such as GPRS to obtain passenger details directly and to inform the next approaching station of a passenger's health or security issue before the vehicle arrives, so that protective measures can be taken in case of emergencies.
Fig. 21 Location details enquiry with reply from PSDMS over Wi-Fi
This work can be further extended with the help of Android applications to support passengers and transport staff, for example with a smart pantry service in trains where the food items and delivery times can be chosen by the passenger during the journey.
References
Abstract In the proposed work, the JLTMCSG tunnel FET has been analyzed, and it is shown that the device can efficiently suppress DIBL and improve carrier transport efficiency. We study the linearity of the TMG and DMG devices based on linearity parameters such as the transconductance gm and the 1-dB compression point. We also investigate the radio frequency performance of the JLTMCSG tunnel FET through parameters such as the cutoff frequency fT, and the analog performance through C-V characteristics, DIBL, and threshold voltage roll-off.
1 Introduction
To improve the performance of the device, there will be continuous scaling of the
device dimension in the nanometer range. The progression of multiple gate structures
such as double gate, tri-gate, and cylindrical surrounding gate [1, 2] has been inves-
tigated to decrease the short-channel effects and to increase the gate controllability
throughout the channel. Cylindrical surrounding gate tunnel FET is a capable candi-
date to meet the challenging limit of CMOS expertise. Due to ultra-sharp source/drain
junction, it gives a rigorous challenge to doping mechanism and thermal accounts
[3] for which new junctionless (JL) transistor has been proposed as a device to attend
these challenges [4]. This device has uniform doping concentration across the chan-
nel. As junctionless transistor operates in bulk conduction mechanism, the channel
of the device should be heavily doped for which conductivity will increase. As the
doping concentration of the channel increases, the carrier mobility decreases, which
leads to the ruin of carrier transfer effectiveness [5, 6].
The triple-material-gate (TMG) structure has three gates made of materials M1, M2, and M3 of lengths L1, L2, and L3, respectively. The three metal gates, which have different work functions, are fused together laterally. The work function of M1 is higher than those of M2 and M3 (M1 > M2 > M3). Because of the difference in work functions, the potential changes at the junctions of the metals, which results in an increased electric field and thereby improves the carrier transport efficiency.
The 2D structure of the JLTMCSG TFET is shown in Fig. 1 [7–9]. This junctionless device has a uniform doping concentration with a doping density of ND = 1 × 10^18 cm^−3. The oxide thickness and the diameter of the silicon body are tox = 2 nm and tsi = 10 nm, respectively. The gate materials' work functions from source to drain are FM1 = 4.8 eV (Au), FM2 = 4.6 eV (W), and FM3 = 4.4 eV (Ti). The Gummel method and the CONMOB physical model have been used for the simulation [10].
From Fig. 2, it is seen that the JLTMCSG TFET with tox = 2 nm has less current than with tox = 6 nm. This is because, in the subthreshold region with a low oxide thickness, the electrons are easily drained away under the same bias, leading to a smaller drain current.
Fig. 2 Variation of drain current ID as a function of gate-to-source voltage VGS for the JLTMCSG TFET with different tox, with parameters L1 = 30 nm, L2 = 60 nm, L3 = 90 nm, rsi = 10 nm, and Vds = 0.5 V
Figure 3 shows the variation of drain current for the JLTMCSG TFET. It is seen that the drain current decreases as the silicon radius is reduced. However, when the channel radius is less than 5 nm, quantum effects alter the device characteristics and become very important. In this paper, we have therefore taken the channel radius to be greater than 5 nm so that quantum effects can be neglected [11].
Fig. 3 Variation of drain current versus Vgs for JLTMCSG TFET with changed silicon radius with L1 = 30 nm, L2 = 60 nm, L3 = 90 nm, rsi = 10 nm, tox = 2 nm, and Vds = 0.5 V
Fig. 4 Variation of drain current versus Vgs for JLTMCSG TFET with different L1, L2, and L3 ratios with rsi = 10 nm, tox = 2 nm, and Vds = 0.5 V
Figure 4 shows that the smaller the ratio of L1, the larger the drain current. When L1 is small, the minimum central potential increases, which lowers the potential barrier of the channel. Therefore, the JLTMCSG TFET with a small L1 ratio has a larger drain current.
Fig. 5 DIBL as a function of the channel length of TMJLSRG TFET for two different radii R = 10 and 20 nm with tox = 2 nm
DIBL is defined as
\[ \text{DIBL} = \frac{\Delta V_{Th}}{\Delta V_{DS}} = \frac{V_{Th1} - V_{Th2}}{V_{DS1} - V_{DS2}}, \tag{1} \]
where VTh1 and VTh2 are the threshold voltages extracted at drain biases VDS1 = 0.1 V and VDS2 = 1.0 V. Figure 5 shows the variation of DIBL with channel length for R = 10 nm and R = 20 nm. It is evident from Fig. 5 that both the threshold voltage and the DIBL performance improve as the channel radius is reduced.
Figure 6 shows the Vth roll-off of TMJLCSG TFET for different channel radii. It
is clear from Fig. 6 that scaling in channel radius results in a decrease in Vth roll-off.
Transconductance plays a key role in the design of analog circuits, affecting bandwidth, noise performance, and offset. Figure 7 shows that, for the JLDMCSG TFET, the transconductance increases with an increase in the gate-to-source voltage, Vgs.
3.2 RF Analysis
Figure 8 shows a change in cutoff frequency with change in the gate-to-source voltage
for TMJLCSG and DMJLCSG TFETs. It can be defined as
\[ f_T = \frac{g_m}{2\pi C_{gg}}. \tag{2} \]
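For a numerical feel of Eqs. (1) and (2), the short Python sketch below evaluates DIBL and fT; the input values in the example calls are arbitrary illustrative numbers, not results extracted from these simulations.

```python
import math

def dibl_mv_per_v(vth_at_low_vds, vth_at_high_vds, vds_low=0.1, vds_high=1.0):
    """|dVTh/dVDS| from Eq. (1), reported as a positive number in mV/V."""
    return abs((vth_at_low_vds - vth_at_high_vds) / (vds_low - vds_high)) * 1e3

def cutoff_frequency_hz(gm_s, cgg_f):
    """fT = gm / (2*pi*Cgg) from Eq. (2)."""
    return gm_s / (2.0 * math.pi * cgg_f)

print(dibl_mv_per_v(0.42, 0.39))                 # example values -> about 33.3 mV/V
print(cutoff_frequency_hz(1e-4, 1e-16) / 1e9)    # gm = 0.1 mS, Cgg = 0.1 fF -> about 159 GHz
```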
Fig. 6 Vth roll-off versus different channel lengths for JLTMCSG TFET at different channel radii with rsi = 10 nm, tox = 2 nm, L1 = 30 nm, L2 = 60 nm, L3 = 90 nm, and Vds = 0.5 V
Fig. 7 Variation of transconductance (gm) with variation of Vgs for JLTMCSG and JLDMCSG TFETs with tox = 2 nm, L1 = 30 nm, L2 = 60 nm, L3 = 90 nm, and Vds = 0.1 V
Fig. 8 Comparison of cutoff frequency curves of TMJLCSG and DMJLCSG TFETs with device parameters tox = 2 nm, rsi = 10 nm, and VDS = 0.1 V
Figure 9 shows the evaluation of the 1-dB compression point for JLTMCSG and
JLDMCSG TFETs.
Fig. 9 1-dB compression point comparison between TMJLCSG and DMJLCSG TFETs with device parameters tox = 2 nm, rsi = 10 nm, and Vds = 0.1 V
4 Conclusion
In this paper, the JLTMCSG TFET has been studied in terms of analog, RF, and linearity performance. The JLTMCSG TFET can effectively suppress DIBL and at the same time improve carrier transport efficiency. It is also shown that the subthreshold current diminishes with a decrease in oxide thickness and silicon channel radius. The linearity performance, 1-dB gain compression point, and transconductance gm are analyzed in comparison with the JLDMCSG TFET. We also conclude that the JLTMCSG TFET offers excellent RF characteristics for high-frequency applications.
References
7. Goel K, Saxena M, Gupta M, Gupta RS (2006) Modeling and simulation of a nanoscale three-
region tri-material gate stack (TRIMGAS) MOSFET for improved carrier transport efficiency
and reduced hot-electron effects. IEEE Trans Electron Dev 53:1623–1633
8. Chiang T-K (2009) A new two-dimensional analytical subthreshold behavior model for short-
channel tri-material gate-stack SOI MOSFETs. Microelectron Reliab 49:113–119
9. Chen M-L, Lin W-K, Chen S-F (2009) A new two-dimensional analytical model for nanoscale
symmetrical tri-material gate stack double gate metal oxide semiconductor field effect transis-
tors. Jpn J Appl Phys 48:104503
10. ATLAS: 3D Device Simulator, SILVACO International (2013)
11. Horiguchi S, Tabe M, Kishi K (1993) Quantum-mechanical effects on the threshold voltage of
ultrathin-SOI nMOSFETs. IEEE Electron Dev Lett 14:569–571
12. Gautam R, Saxena M, Gupta RS, Gupta M (2012) Nanoscale cylindrical gate MOSFET analog
performance. Microelectron Reliab 52(6):989–994
13. Ghosh P, Haldar S, Gupta RS, Gupta M (2012) An investigation of linearity performance and
intermodulation distortion of GME CGT MOSFET for RFIC design. IEEE Trans Electron Dev
59(12):3263–3268
14. Kaya S, Ma W (2004) Optimization of RF linearity in DG MOSFET. IEEE Electron Device
Lett 25(5):308–310
Impact of p-GaN Gate Length
on Performance of AlGaN/GaN
Normally-off HEMT Devices
Abstract In this work, we have studied the effect of variation in length of GaN gate
with p-type doping concentration on the DC performances of AlGaN/GaN normally-
off HEMT using 2D Atlas TCAD simulator. A comprehensive simulation is under-
taken on the proposed device to examine different performance parameters such as
drain current, transconductance factor, energy band diagram, and surface potential
with respect to change in p-type GaN gate lengths. The gate lengths are varied from
60 to 90 nm, and it is noticed from the simulation results that both the drain current and the transconductance increase as the gate length is decreased. A proper optimization of the gate length is indispensable to preserve the normally-off mode of operation while at the same time enhancing these performance parameters.
1 Introduction
The search for materials with higher mobility and higher breakdown voltage than silicon has focused attention on III–V compound semiconductors in the nanoscale regime [1, 2]. AlGaN/GaN HEMTs are quite promising devices for high-frequency and high-power applications owing to the wide bandgap of GaN (3.4 eV) [3, 4]. In a GaN-based HEMT, doping of the channel is not required because of the strong polarization effect [5] at the AlGaN/GaN heterojunction; as a result of this polarization, a two-dimensional electron gas (2DEG) is formed in the channel, which makes these devices normally-on and hence unsuitable for power electronics as well as digital applications. Hilt et al. [6] proposed a methodology to obtain a normally-off AlGaN/GaN heterostructure using a p-type GaN gate and an AlGaN buffer with a gate length of 1.4 µm, achieving a threshold voltage of more than 1 V. Further, Hilt et al. [7] also demonstrated a better threshold voltage using a p-type GaN gate, an AlGaN back barrier, and a carbon-doped buffer. In this paper, we further study the effect of varying the p-GaN gate length on the performance of AlGaN/GaN normally-off HEMTs to gain a better understanding of the reliability of this prospective device.
2 Device Structure
The cross-sectional view of the p-GaN gate Al0.23Ga0.77N/GaN HEMT with an Al0.05Ga0.95N buffer is presented in Fig. 1. The p-GaN gate length of the device is varied from 60 to 90 nm, with an AlGaN front barrier of 15 nm width and a GaN channel of 30 nm width; an Al0.05Ga0.95N buffer is used in the proposed device [8]. The gate is a p-type GaN material with a doping concentration of 1 × 10^18 cm^−3. The proposed device is passivated with a Si3N4 layer over the gate.
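The gate-length study can be organised as a simple parameter sweep, as in the hedged Python sketch below; run_tcad() is a hypothetical wrapper standing in for the 2D ATLAS deck and is not a Silvaco API call.

```python
# Parameter-sweep sketch mirroring the simulation study described above.
# run_tcad() is a placeholder only; the real simulations were performed in ATLAS.

device_template = {
    "gate_material": "p-GaN",
    "p_gan_doping_cm3": 1e18,
    "algan_barrier_nm": 15,
    "gan_channel_nm": 30,
    "buffer": "Al0.05Ga0.95N",
    "passivation": "Si3N4",
}

def run_tcad(params):
    """Placeholder: would launch the TCAD run and return extracted figures of merit."""
    raise NotImplementedError

def sweep_gate_length(lengths_nm=(60, 70, 80, 90)):
    results = {}
    for lg in lengths_nm:
        params = dict(device_template, p_gan_gate_length_nm=lg)
        results[lg] = run_tcad(params)   # e.g. Id, gm, threshold voltage, DIBL
    return results
```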
From the simulation results, it is apparent that the depth of the quantum well decreases as the p-GaN gate length increases. This is clearly depicted in Fig. 2, which compares the energy band diagrams for different gate lengths of the device. When the gate length is increased from 60 to 90 nm, the depth of the quantum well decreases, which in turn reduces the confinement of carriers. Hence, the variation in gate length plays a significant role in controlling the leakage current through the substrate.
Since the depth of the quantum well decreases with increasing gate length, the confinement of carriers also decreases, and hence the leakage current through the substrate tends to increase.

Fig. 2 Comparison of energy band structure by varying gate length

Fig. 3 Simulated surface potential diagram of p-type GaN gate AlGaN/GaN HEMT with p-GaN gate length variation a 60 nm, b 70 nm, c 80 nm, d 90 nm

In Fig. 3, we show the simulated surface potential of the p-type GaN gate AlGaN/GaN HEMT for different p-GaN gate lengths. The figure clearly shows that as the p-GaN gate length is increased from 60 to 90 nm the surface potential changes, which is also depicted in Fig. 4, the variation of the surface potential distribution along the channel with p-GaN gate length. This variation of surface potential also affects the threshold voltage and the switching characteristics. It confirms that as the gate length of the p-GaN gate is increased the surface potential tends to be reduced; this lowering of the surface potential tends to increase the threshold voltage of the device [10].
Figure 5 confirms that, in the transfer characteristics of the normally-off AlGaN/GaN HEMT, the drain current decreases as the GaN gate length is increased from 60 to 90 nm, owing to the reduction in the depth of the quantum well indicated in Fig. 2. This behavior is mainly due to the weaker confinement of carriers as the p-GaN gate length is increased, which decreases the 2DEG concentration in the channel and in turn reduces the drain current. From the simulated results, we observe a higher drain current of 0.2 A/µm at Vgs = 1.4 V for the 60 nm p-type GaN gate length and 0.17 A/µm for the 70 nm gate length.
Fig. 4 Comparison of surface potential in the channel for variation of gate length

Therefore, we suggest that a lower p-type GaN gate length can be used for better performance of the device.
The variation in the transconductance factor (gm) for different p-GaN gate lengths is shown in Fig. 6. It is noticed that as the gate length increases from 60 to 90 nm, the transconductance factor decreases from 325 to 245 mS/mm. This trend is similar to that observed in Fig. 5: since the drain current is lowered with increasing gate length, the transconductance decreases significantly. Therefore, we can conclude that a lower gate length is more desirable for higher power applications and higher switching speeds.
DIBL, i.e., drain-induced barrier lowering, is considered one of the important short-channel-effect (SCE) parameters affecting device performance in the nanoscale regime. It is given mathematically as follows:
\[ \text{DIBL} = \frac{\Delta V_{Th}}{\Delta V_{DS}} = \frac{V_{Th1} - V_{Th2}}{V_{DS1} - V_{DS2}}, \tag{1} \]
where VTh1 and VTh2 are the threshold voltages extracted at drain biases VDS1 = 0.1 V and VDS2 = 1.0 V. In Fig. 7, we have verified the threshold voltage roll-off with respect
to doping concentration for gate lengths varying from 60 to 90 nm. It is observed that as the gate length is increased, a significant rise in the threshold voltage roll-off occurs. Thus, we can conclude that a lower p-GaN gate length is ideal for a higher Ion/Ioff ratio. As inferred from the figure, the threshold voltage and DIBL performance improve with a reduction in doping concentration as well as with a higher p-GaN gate length.
5 Conclusion
In this paper, it is concluded that the variation of the p-GaN gate length in a normally-off HEMT has significant effects on various performance parameters. Therefore, a proper optimization of the gate length must be considered to retain the normally-off condition. It is also verified that the variation of the p-GaN gate length affects the DIBL, the drain current, and the transconductance factor. However, the effect of gate length on breakdown behavior and on other RF and linearity parameters is left for future work.
Acknowledgements The authors would like to thank Silicon Institute of Technology, Patia Hills,
Bhubaneswar 751024, India for their valuable support in carrying out this research work.
References
1. Adak S, Swain SK, Singh A, Pardeshi H, Pati SK, Sarkar CK (2014) Study of HfAlO/AlGaN/GaN MOS-HEMT with source field plate structure for improved breakdown voltage. Phys E 64:152–157
2. Del Alamo JA (2011) Nanometre-scale electronics with III–V compound semiconductors.
Nature 479:317–323
3. Theis TN, Solomon PM (2010) In quest of the next switch: prospects for greatly reduced power
dissipation in a successor to the silicon field-effect transistor. Proc IEEE 98(12):2005–2014
4. Mishra U, Parikh P, Wu Y (2002) AlGaN/GaN HEMTs—an overview of device operation and
applications. Proc IEEE 90(6):1022
5. Ibbetson JP, Fini PT, Ness KD, Den Baars SP, Speck JS, Mishra UK (2000) Polarization effects,
surface states, and the source of electrons in AlGaN/GaN heterostructure field effect transistors.
Appl Phys Lett 77(2):250–252
6. Hilt O, Knauer A, Brunner F, Bahat-Treidel E, Würfl J (2010) Normally-off AlGaN/GaN HFET
with p-type GaN gate and AlGaN buffer. In: Proceedings ISPSD, pp 347–350
7. Hilt O, Brunner F, Cho E, Knauer A, Bahat-Treidel E, Würfl J Normally-off high-voltage p-
GaN gate GaN HFET with carbon-doped buffer. In: Proceedings 23rd ISPSD, May 2011, pp
239–242
8. Adak S, Swain S, Rahaman H, Sarkar CK (2006) Effect of doping in p-GaN gate on DC
performances of AlGaN/GaN normally-off scaled HFETs. Solid-state Electron 50(6)
9. Device simulator ATLAS User manual. Silvaco International. Santa Clara, CA, May 2013.
http://www.silvaco.com
10. Gila BP, Ren F et al (2004) Novel insulators for gate dielectrics and surface passivation of
GaN-based electronic devices. Mat Sci Eng R 44:151–184
Analysis of Effect of Concavity in Linear
and Planar Microstrip Patch Antenna
Arrays
1 Introduction
The advantages of microstrip patch antenna arrays are their simple manufacturing,
adaptability, low cost, and small size. The microstrip patch antenna arrays are widely
used in phased array antenna applications such as electronic scanning, beam forming,
and smart antenna systems [1]. The mutual coupling effect must be reduced between
the antenna elements of an array as it affects the overall radiated power and pattern of the array.
N. K. Darimireddy (B)
Department of ECE, Lendi IET, JNTUK, Kakinada, Andhra Pradesh, India
e-mail: yojitnaresh@gmail.com
R. Ramana Reddy · S. Mastan Vali
Department of ECE, MVGRCE (A), Vizianagaram, Andhra Pradesh, India
e-mail: profrrreddy@yahoo.co.in
S. Mastan Vali
e-mail: prof_vali@yahoo.com
Mohanna et al. [2] reported that if the mutual coupling is not considered,
it results in errors in beam forming and null steering. Farahbakhsh et al. [3] analyzed the effect of concavity on a 2-element rectangular patch array with respect to the width, length, and depth of the concavity. Using concave rectangular patches instead of normal rectangular patches is an effective way to reduce mutual coupling in linear and planar arrays of microstrip antennas [4, 5]. Yazdi et al. [6] and Yoon et al. [7] reported mutual coupling reduction in microstrip patch antenna arrays using stacking and EBG (Electromagnetic Band Gap) methods, respectively. A substantial decrease in mutual coupling between elements, reflected in the change in return loss compared to a rectangular patch array, is addressed in [8]. An array consisting of two rectangular patch antennas and six parallel metallic strips is presented in [9], achieving −42 dB mutual coupling at 5.8 GHz. In this paper, microstrip patch antenna arrays with a 4-element concave linear configuration and 4-element and 16-element concave planar configurations are fabricated and tested to verify the simulation results.
The design of the concave linear 4-element array is presented in Fig. 1a. The length (L) and width (W) of the concave patch are 9 mm and 10 mm, respectively, and the height of the substrate is 1.6 mm. The depth (d) of concavity is 1.25 mm. The length and width of the FR4 epoxy material used are 80 mm and 80 mm, respectively, and the ground plane length and width are likewise 80 mm and 80 mm. The dimensions of the proposed concave 4-element linear array are W1 = 15 mm, W2 = 30 mm, W3 = 1 mm, W4 = 2 mm, W5 = 3 mm, L1 = 7.5 mm, L2 = 5 mm, and L3 = (L1 + d) mm, as designated in Fig. 1a. A rectangular 4-element linear array with similar dimensions (d = 0) is simulated as shown in Fig. 1b.
Fig. 1 Design of a proposed concave linear 4-element array, b rectangular 4-element linear array
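Although the full patterns come from HFSS, the basic behaviour of a 4-element linear array can be illustrated with a textbook uniform array-factor calculation, as in the Python sketch below; the element spacing in wavelengths is an assumed placeholder and is not derived from the fabricated geometry.

```python
import numpy as np

# Uniform linear array factor for N equally spaced, equally excited elements.
# spacing_wl (element spacing in wavelengths) is an illustrative assumption.
def array_factor_db(n_elements=4, spacing_wl=0.5, theta_deg=np.linspace(0, 180, 361)):
    theta = np.radians(theta_deg)
    k_d = 2 * np.pi * spacing_wl                      # electrical spacing
    n = np.arange(n_elements)[:, None]
    af = np.sum(np.exp(1j * n * k_d * np.cos(theta)), axis=0)
    af_norm = np.abs(af) / n_elements
    return 20 * np.log10(np.maximum(af_norm, 1e-6))   # dB, clipped to avoid log(0)

pattern = array_factor_db()
print(f"Array factor at broadside: {pattern[180]:.1f} dB")   # theta = 90 deg gives 0 dB
```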
Fig. 2 a Simulated S11 of the linear 4-element array for different concavities, b simulated radiation pattern of the linear 4-element array for concavity d = 1.25 mm
The proposed concave 4-element linear array with a full ground plane is presented. The effect of mutual coupling is observed for various values of concavity depth (d), as shown in Fig. 2a. For d = 0 mm, that is, for the rectangular 4-element linear array, the S11 obtained is −15.72 dB. For the concave 4-element linear array, S11 is −16.98 dB for concavity d = 1 mm, −33.18 dB for d = 1.25 mm, and −16.37 dB for d = 1.5 mm. From the simulation results, it is evident that as the concavity increases up to 1.25 mm, the magnitude of S11 improves by 111.068% (from −15.72 dB to −33.18 dB) as a result of the decrease in mutual coupling. If the concavity is increased beyond this value, the mutual coupling increases. The radiation pattern of the concave linear 4-element array antenna is shown in Fig. 2b. The peak gain obtained is 9.39 dBi.
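The quoted improvement figure is simply the relative change in the magnitude of S11 between the rectangular (d = 0) and the d = 1.25 mm cases; the Python snippet below reproduces it, up to rounding of the quoted dB values, from the numbers listed above.

```python
# |S11| values (dB) quoted above for the 4-element linear array versus concavity depth d (mm)
s11_db = {0.0: -15.72, 1.0: -16.98, 1.25: -33.18, 1.5: -16.37}

def improvement_percent(ref_db, new_db):
    """Relative increase in the magnitude of S11 with respect to the reference case."""
    return 100.0 * (abs(new_db) - abs(ref_db)) / abs(ref_db)

print(f"{improvement_percent(s11_db[0.0], s11_db[1.25]):.3f} %")   # prints 111.069 %
```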
The fabricated 4-element concave linear microstrip patch array is shown in Fig. 3a.
In Fig. 3b, measured and simulated S11 of the 4-element concave linear array is
presented. Measured results are in near agreement with the simulated results.
Fig. 3 a Fabricated concave linear 4-element array, b comparison of simulated and measured S11 for concave linear 4-element array

The physical structure and dimensions of the proposed 4-element and 16-element concave planar antennas are shown in Fig. 4a, b. The length and width of the concave patch are 9 mm and 10 mm, respectively, and the height of the substrate is 1.6 mm for both the 4-element and 16-element planar arrays. For the 4-element planar array, the length and width of the FR4 epoxy material used are 80 mm and 80 mm, respectively, and the ground plane length and width are 80 mm and 80 mm, respectively. For the 16-element planar array, the length and width of the FR4 epoxy material used are 140 mm and 80 mm, respectively, and the ground plane length and width are 100 mm and 80 mm, respectively. For the proposed planar arrays, the depth of concavity (d) is
1.25 mm. The dimensions of the proposed concave 4-element planar array are W1 = 1 mm, W2 = 15 mm, W3 = 2 mm, L1 = 7.5 mm, and L2 = 27.5 mm, as designated in Fig. 4a. The proposed concave 16-element planar array has the dimensions W1 = 30 mm, L1 = 7.5 mm, and L2 = 50 mm. Rectangular planar arrays with 4 elements and 16 elements and similar L and W are also simulated. The rectangular planar array with 4 elements is presented in Fig. 5a and with 16 elements in Fig. 5b.

Fig. 4 Design of the proposed a concave 4-element planar array, b concave 16-element planar array
Fig. 5 Design of a rectangular 4-element planar array, b rectangular 16-element planar array
The S11 of the 4-element planar array is observed for various values of concavity depth (d), as shown in Fig. 6a. For d = 0 mm, that is, for the rectangular planar 4-element array, the S11 obtained is −17.68 dB. For the concave planar 4-element array, S11 is −18.57 dB for concavity d = 1 mm, −25.75 dB for d = 1.25 mm, and −19.72 dB for d = 1.5 mm. From the simulation results, it is evident that as the concavity increases (from d = 0 to 1.25 mm), the magnitude of S11 improves by 45.64% as a result of the decrease in mutual coupling. If the concavity is increased beyond this value, the mutual coupling increases. Figure 6b shows the radiation pattern of the concave 4-element planar array; the peak gain obtained is 9.2 dBi.
Fig. 6 a S11 of 4-element planar array for various concavities, b radiation pattern of 4-element planar array for concavity d = 1.25 mm
The S11 versus frequency of the 16-element planar array is simulated for different values of concavity depth (d) and presented in Fig. 7a. For d = 0 mm, that is, for the rectangular planar 16-element array, the S11 obtained is −11.68 dB at 10.2 GHz and −14.9 dB at 13 GHz. For the concave planar 16-element array, S11 for concavity d = 1 mm is −15.7 dB at 10.4 GHz and −17.88 dB at 13 GHz; for d = 1.25 mm, S11 is −15.46 dB at 10.3 GHz and −17.91 dB at 12.9 GHz; and for d = 1.5 mm, S11 is −16.57 dB at 10.3 GHz and −17.85 dB at 12.9 GHz. From the simulation results, it is evident that as the concavity increases (from d = 0 to 1.25 mm), the magnitude of S11 improves by 25.77% as a result of the decrease in mutual coupling. If the concavity is increased beyond this value, the mutual coupling increases. Figure 7b shows the radiation pattern of the concave planar 16-element array; the peak gain obtained is 9.57 dBi.
Fig. 7 a S11 of the concave planar 16-element array for various concavities, b radiation pattern of the concave planar 16-element array for concavity d = 1.25 mm
Fig. 8 Fabricated structures of the concave planar antenna with a 4-element array, b 16-element
array
The fabricated concave planar 4-element array and 16-element array are shown in
Fig. 8.
The comparisons between the measured and simulated S11 of the 4-element and 16-element concave planar arrays between 8.5 and 10 GHz are presented in Fig. 9a and Fig. 9b, respectively. The measured results are in close agreement with the simulated results.
Simulated S11 for the 4-element concave linear array, 4-element and 16-element
concave planar array are presented in Fig. 10 to compare the effect of mutual coupling
on linear and planar arrays. From the results, it is evident that the mutual coupling
effect is less in linear arrays compared to planar arrays.
Fig. 9 a Measured and simulated S11 of 4-element planar array, b measured and simulated S11 of
concave planar 16-element array
5 Conclusions
The benefits of an array can be fully achieved by reducing the mutual coupling effect among the radiators, and arrays with concave elements can reduce this mutual coupling. It is evident from the results that as the concavity increases (from d = 0 to 1.25 mm), the mutual coupling decreases; beyond a certain value of concavity, the mutual coupling increases, resulting in a degraded S11. For the proposed 4-element concave linear array and the 4-element and 16-element concave planar arrays, the S11 obtained is −33.18 dB, −25.75 dB, and −17.91 dB, respectively. For rectangular arrays of similar dimensions (4-element linear, 4-element planar, and 16-element planar), the S11 obtained is −15.72 dB, −17.68 dB, and −14.9 dB, respectively. The proposed linear and planar antenna arrays are simulated using HFSS and tested practically using a vector network analyzer. It is evident from the results that the mutual coupling can be reduced by selecting suitable values of concavity. It is also evident that the mutual coupling
effect is less in linear arrays compared to planar arrays. Measured results are in close
agreement with the simulated results.
References
1. Balanis CA (2008) Modern antenna handbook. John Wiley & Sons, New York, NY, USA
2. Mohanna S, Farahbakhsh A, Tavakoli S, Ghassemi N (2010) Reduction of mutual coupling and
S11 in microstrip array antennas using concave rectangular patches. Int J Microw Sci Technol
Hindawi corp.
3. Farahbakhsh S, Mohanna H, Tavakoli S, Oukati-Sadegh M New patch configurations to reduce
the mutual coupling in microstrip array antenna. In: Proceedings of the antennas and propagation
conference (LAPC ’09). Loughborough, UK, pp 469–472, Nov 2009
4. Nikolić MM, Djordjević AR, Nehorai A (2005) Microstrip antennas with suppressed radiation in horizontal directions and reduced coupling. IEEE Trans Antennas Propag 53(11):3469–3476
5. Parthasarathy K (2006) Mutual coupling in patch antenna arrays. M.S. thesis, University of
Cincinnati
6. Yazdi SR, Chamaani S, Arash Ahmadi S (2015) Mutual coupling reduction in microstrip phased
array using stacked-patch reduced surface wave antenna. In: IEEE International symposium on
antennas and propagation & USNC/URSI national radio science meeting, pp 436–437
7. Yoon Y-M, Koo H-M, Kim T-Y, Kim B-G (2012) Effect of edge reflections on the mutual
coupling of a two-element linear microstrip patch antenna array positioned along the E-plane.
IEEE Antennas Wirel Propag Lett 11:783–786
8. Yousefzadeh Naser, Ghobadi Changiz, Kamyab Manouchehr (2006) Consideration of mutual
coupling in a microstrip patch array using fractal elements. Prog Electromagn Res 66:41–49
9. Sun Xu-bao, Cao Mao Yong (2017) Low mutual coupling antenna array for WLAN application.
Electron Lett 53(6):368–370
Author Index
A B. Sreenivasulu, 729
Afroz Fatima, 271 B. Srinivas, 729
Ajay D. Nagne, 45, 517, 525 Budidi Udaya Kumar, 659
A. Jhansi Rani, 181
Akula Susmitha, 443 C
Ali Mirza Mahmood, 145 C. C. Anthony, 495
A. Mallikarjuna Prasad, 83, 567 Chanamala Vijay, 401
Amarsinh B. Varpe, 517, 525 Ch. Murali Krishna, 299
Amir M. U. Wagdarikar, 597 C. Vijaya, 433
Amogh Raut, 317 C. Vinothraj, 257
Amol D. Vibhute, 45, 247, 505, 517, 525 Cyril Joe Baby, 55
Amrsinh B. Varpe, 247, 505
Anirudh Itagi, 55 D
Anju Damodaran, 337 D. B. V. Jagannadham, 777
Anupam K. Swain, 793 Debasish Nayak, 235
A. Ramesh Babu, 213 Dennis Joseph, 409
Arshad Ahmad Khan Mohammad, 145 Dhananjay B. Nalawade, 247, 505, 525
A. Sailaja, 729 Dhruv Tyagi, 317
A. V. Dehankar, 577 D. Indra Jagadeesh, 587
A. Vijayakumari, 257 Divyasree Mallepalle, 617
Ayanavalli Ramadevi, 421 D. J. Nagendra Kumar, 421
D. Koteswar, 777
B D. Preethi, 607
Bandan Kumar Bhoi, 23 D. Saravanan, 409
B. Anuradha, 107 D. Sindhanaiselvi, 127, 137, 465
Barooru Lakshmi Malleswari, 535 D. Siva Patro, 803
Basheer Ali Sheik, 375 D. Vamsi Raju, 671
Beulah Sujan, 137 D. V. Sai Narayana, 777
B. Girirajan, 453
Bharati Bidikar, 767 G
Bhogadula Yogitha, 443 G. Anand Kumar, 703
Birendra Biswal, 235, 793, 803 Ganesh Pakle, 33
Brahmjit Singh, 683 G. Keerthika, 443
H N
Hanumant R. Gite, 247, 505 Nakkella Madhuri, 421
N. Bhalaji, 171
J N. Deepika Rani, 693
J. Arokia Renjit, 99 Neeraj Kumar Misra, 23
Jayachandra Prasad Talari, 617 Nelapati Ananda Rao, 279, 485
Joy Anil Gomes, 337 N. Kalaiyazhagan, 127
Jyoti Ranjan Sahoo, 793 N. K. Darimireddy, 811
N. Sujitha, 199
K N. Udaya Kumar, 349
K. Anusudha, 749
Karbhari V. Kale, 45, 189, 247, 505, 517, 525 P
K. Babulu, 559 Parminder Kaur Birdi, 189
K. Bala Sindhuri, 349 Parnapalli Sreesudha, 535
Keshab Nath, 289 P. Ashok Kumar, 443, 587
K. Girija Sravani, 443, 587 P. Ganesan, 453
K. Jagadeesh Babu, 83 P. Illavarason, 99
K. Kabilan, 171 P. James Vijay, 299
K. Kishore Kumar, 713 P. Kiranmayi, 349
K. K. Yadav, 495 P. Krishna Kanth Varma, 299
Koushik Guha, 587 P. Lavanya, 567
K. P. Bharath, 55, 65, 317, 327 P. Mohan Kumar, 99
K. Rasool Reddy, 671 Poonam Jindal, 367
K. Renu, 161 P. Rajesh Kumar, 161
K. Satyaprasad, 693 Prakasam Periasamy, 311
K. Sridevi, 181 Preetam Satapathy, 327
K. Srinivasa Rao, 545, 587 Prema Kumar Navuri, 475
K. Srinivas Rao, 443 P. Siva Kumar, 693
Kunal Routaray, 793 P. Sivagami, 223
K. Vasu Babu, 107 P. Srinivasa Rao, 83
K. V. Ramesh, 729 P. Sudheesh, 91
Pullela S. V. V. S. R. Kumar, 421
L P. Velazhagan, 213
L. M. I. Leo Joseph, 453 P. V. Rama Raju, 375
P. V. Sridevi, 375, 703
M
Mahesh M. Solankar, 45, 247, 505 R
Makesh Iyer, 627 Rajaram Pichamuthu, 311
Manoj Kumar Merugumalla, 475 Rajesh K. Dhumal, 45, 517, 525
Manoranjan Pradhan, 23 Rajkumar Goswami, 767
M. Arunpandian, 641 Ramchandra Manthalkar, 33
M. D. Vijayakumar, 213 Ramannolla Meena, 1
Minchula Vinodh Kumar, 401 Ranjan K. Senapati, 597
M. Jayakumar, 91 Ravi Nirlakalla, 617
M. Kamaraju, 559 R. Harikrishnan, 223
M. Nanda Kumar, 741 Ritam Sil, 65