Control Engineering III
Introduction
A linear system designed to perform satisfactorily when excited by a standard test
signal will exhibit satisfactory behaviour under any circumstances. Furthermore, the
amplitude of the test signal is unimportant, since any change in input signal amplitude
results simply in a change of response scale with no change in the basic response
characteristics. The stability of a linear system is determined solely by the location of
the system poles and is entirely independent of whether or not the system is driven.
In contrast to the linear case, the response of a nonlinear system to a particular test
signal is no guide to its behaviour under other inputs, since the principle of superposition no
longer holds. In fact, the nonlinear system response may be highly sensitive to the input
amplitude. Here the stability depends on the input and also on the initial state. Further,
nonlinear systems may exhibit limit cycles, which are self-sustained oscillations of fixed
frequency and amplitude.
For a spring-mass-damper system with a linear spring, the equation of motion is
M ẍ + f ẋ + K x = F cos ωt … … (3.1)
Fig. 3.2 Frequency response curve of spring-mass-damper system
Let us now assume that the restoring force of the spring is nonlinear, given by K1x + K2x³.
The nonlinear spring characteristic is shown in Fig. 3.1(b). The system equation now
becomes
M ẍ + f ẋ + K1 x + K2 x³ = F cos ωt … … (3.2)
The frequency response curve for a hard spring (K2 > 0) is shown in Fig. 3.3(a).
For a hard spring, as the input frequency is gradually increased from zero, the measured
response follows the curve through the points A, B and C; at C, however, an increment in
frequency results in a discontinuous jump down to the point D, after which, with further
increase in frequency, the response follows the curve DE. If the frequency is now decreased,
the response follows the curve EDF, with a jump up to B from the point F, and then moves
towards A. This phenomenon, which is peculiar to nonlinear systems, is known as jump
resonance. For a soft spring, the jump phenomenon occurs as shown in Fig. 3.3(b).
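The jump behaviour can be reproduced numerically. The sketch below (parameter values are illustrative assumptions, not taken from the text) integrates eqn. (3.2) with a fourth-order Runge-Kutta routine, sweeping the forcing frequency up and then down while carrying the final state forward, so the response stays on one branch of the curve until it jumps.

```python
import math

def duffing_amplitude(w, x0, v0, M=1.0, f=0.2, k1=1.0, k2=1.0, F=2.0,
                      dt=0.02, settle=60.0, measure=20.0):
    """Integrate M*x'' + f*x' + k1*x + k2*x**3 = F*cos(w*t) with RK4;
    return the steady-state amplitude and the final state."""
    def deriv(t, x, v):
        return v, (F*math.cos(w*t) - f*v - k1*x - k2*x**3) / M
    x, v, t, amp = x0, v0, 0.0, 0.0
    for _ in range(int((settle + measure) / dt)):
        a1, b1 = deriv(t, x, v)
        a2, b2 = deriv(t + dt/2, x + dt/2*a1, v + dt/2*b1)
        a3, b3 = deriv(t + dt/2, x + dt/2*a2, v + dt/2*b2)
        a4, b4 = deriv(t + dt, x + dt*a3, v + dt*b3)
        x += dt/6*(a1 + 2*a2 + 2*a3 + a4)
        v += dt/6*(b1 + 2*b2 + 2*b3 + b4)
        t += dt
        if t > settle:                 # measure only after transients die out
            amp = max(amp, abs(x))
    return amp, (x, v)

# Sweep the forcing frequency up and then down, carrying the final state
# forward so the response stays on one branch until it jumps.
freqs = [0.6 + 0.05*i for i in range(29)]      # 0.6 ... 2.0 rad/s
state, up, down = (0.0, 0.0), [], []
for w in freqs:
    a, state = duffing_amplitude(w, *state)
    up.append(a)
for w in reversed(freqs):
    a, state = duffing_amplitude(w, *state)
    down.append(a)
down.reverse()
```

Plotting `up` and `down` against `freqs` traces the two branches of Fig. 3.3(a); in the bistable band the two sweeps disagree, which is the jump resonance.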
Methods of Analysis
Nonlinear systems are difficult to analyse, and arriving at general conclusions is tedious.
However, starting with the classical techniques for the solution of standard nonlinear
differential equations, several techniques have been evolved which suit different types of
analysis. It should be emphasised that very often the conclusions arrived at will be useful
for the system under specified conditions and do not always lead to generalisations. The
commonly used methods are listed below.
Linearization Techniques:
In reality all systems are nonlinear, and linear systems are only approximations of
nonlinear systems. In some cases the linearization yields useful information, whereas in
other cases the linearised model has to be modified as the operating point moves
from one region to another. Many techniques, such as the perturbation method, series
approximation techniques and quasi-linearization techniques, are used to linearise a
nonlinear system.
Phase Plane Analysis:
This method is applicable to second order linear or nonlinear systems for the study of the
nature of phase trajectories near the equilibrium points. The system behaviour is
qualitatively analysed, along with the design of system parameters, so as to get the desired
response from the system. The periodic oscillations in nonlinear systems, called limit
cycles, can be identified with this method, which helps in investigating the stability of the
system.
Describing Function Analysis:
This method is based on the principle of harmonic linearization and applies to a certain
class of nonlinear systems with a low-pass characteristic. It is useful for the study of the
existence of limit cycles and for the determination of the amplitude, frequency and stability
of these limit cycles. The accuracy is better for higher order systems, as they have a better
low-pass characteristic.
Classification of Nonlinearities:
The nonlinearities are classified into
i) Inherent nonlinearities and
ii) Intentional nonlinearities.
The nonlinearities which are present in the components of a system due to inherent
imperfections or properties of the system are known as inherent nonlinearities. Examples
are saturation in magnetic circuits, dead zone, backlash in gears, etc. However, in some
cases the introduction of a nonlinearity may improve the performance of the system,
making it more economical, consuming less space and more reliable than a linear system
designed to achieve the same objective. Such nonlinearities, introduced intentionally to
improve the system performance, are known as intentional nonlinearities. Examples are
the different types of relays which are very frequently used to perform various tasks.
Saturation: This is the most common of all nonlinearities. All practical systems, when
driven by sufficiently large signals, exhibit the phenomenon of saturation due to limitations
of physical capabilities of their components. Saturation is a common phenomenon in
magnetic circuits and amplifiers.
Fig. 3.4 Piecewise linear approximation of saturation nonlinearity
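The piecewise-linear approximation of Fig. 3.4 can be written as a one-line function; the slope k and the saturation limit s here are arbitrary illustrative parameters.

```python
def saturation(x, k=1.0, s=1.0):
    """Piecewise-linear saturation: slope k for |x| <= s, constant output beyond."""
    if x > s:
        return k * s
    if x < -s:
        return -k * s
    return k * x
```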
Friction: Retarding frictional forces exist whenever mechanical surfaces come into sliding
contact. The predominant frictional force, called viscous friction, is proportional to the
relative velocity of the sliding surfaces; viscous friction is thus linear in nature. In addition
to viscous friction, there exist two nonlinear frictions. One is Coulomb friction, which is a
constant retarding force, and the other is stiction, which is the force required to initiate
motion. The force of stiction is always greater than that of Coulomb friction since, due to
interlocking of surface irregularities, more force is required to move an object from rest
than to maintain it in motion.
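The three friction components described above can be combined into a single force law. The sketch below is a simplified model with assumed illustrative coefficients (f_v viscous, f_c Coulomb, f_s stiction): viscous plus Coulomb friction oppose motion, while stiction holds the body at rest until the applied force exceeds the breakaway level.

```python
def friction_force(v, applied, f_v=0.5, f_c=1.0, f_s=1.5):
    """Total friction force opposing motion: viscous + Coulomb while moving;
    stiction holds the body at rest until |applied| exceeds f_s."""
    if v != 0.0:
        sign = 1.0 if v > 0 else -1.0
        return -(f_v * abs(v) + f_c) * sign
    # at rest: friction balances the applied force up to the stiction limit
    if abs(applied) <= f_s:
        return -applied
    # breakaway: motion starts, Coulomb friction opposes the applied force
    return -f_c if applied > 0 else f_c
```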
Dead zone: Some systems do not respond to very small input signals. For a particular
range of input, the output is zero. This is called the dead zone of the system. The input-
output curve is shown in the figure.
Figure 3.7: (a) gear box having backlash (b) tooth A of the driving gear located midway
between the teeth B1, B2 of the driven gear (c) the relationship between input and
output motions.
As tooth A is driven clockwise from this position, no output motion takes place until
tooth A makes contact with the tooth B1 of the driven gear after travelling a distance
x/2. This output motion corresponds to the segment mn of Fig. 3.7(c). After the contact is
made, the driven gear rotates counter-clockwise through the same angle as the driving gear, if
the gear ratio is assumed to be unity. This is illustrated by the line segment no. As the input
motion is reversed, the contact between the teeth A and B1 is lost and the driven gear
immediately becomes stationary, based on the assumption that the load is friction controlled
with negligible inertia.
The output motion therefore ceases till tooth A has travelled a distance x in the reverse
direction, as shown in Fig. 3.7(c) by the segment op. After tooth A establishes contact
with the tooth B2, the driven gear now moves in the clockwise direction, as shown by segment
pq. As the input motion is reversed again, the driven gear is again at standstill for the segment
qr and then follows the driving gear along rn.
Relay: A relay is a nonlinear power amplifier which can provide large power amplification
inexpensively and is therefore deliberately introduced in control systems. A relay-
controlled system can be switched abruptly between several discrete states, which are
usually off, full forward and full reverse. Relay-controlled systems find wide applications
in the control field. The characteristic of an ideal relay is as shown in the figure. In practice, a
relay has a definite amount of dead zone, as shown. This dead zone is caused by the fact
that the relay coil requires a finite amount of current to actuate the relay. Further, since a larger
coil current is needed to close the relay than the current at which the relay drops out, the
characteristic always exhibits hysteresis.
Figure 3.8: Relay Non Linearity (a) ON/OFF (b) ON/OFF with Hysteresis (c) ON/OFF
with Dead Zone
Multivariable Nonlinearity: Some nonlinearities such as the torque-speed characteristics
of a servomotor, transistor characteristics etc., are functions of more than one variable.
Such nonlinearities are called multivariable nonlinearities.
Introduction
Phase plane analysis is one of the earliest techniques developed for the study of second
order nonlinear systems. It may be noted that in the state space formulation, the state
variables chosen are usually the output and its derivatives. The phase plane is thus a state
plane in which the two state variables x1 and x2 are analysed; these may be the output
variable y and its derivative ẏ. The method was first introduced by Poincaré, a French
mathematician. It is used for obtaining graphically a solution of the following
two simultaneous equations of an autonomous system:
ẋ1 = f1(x1, x2)
ẋ2 = f2(x1, x2)
In particular, with x2 = ẋ1 these take the form
ẋ1 = x2
ẋ2 = f2(x1, x2)
The curve described by the state point (x1, x2) in the phase plane, with time as the running
parameter, is called a phase trajectory. The plot of the state trajectories or phase trajectories
of the above equations thus gives an idea of the solution of the state as time t evolves,
without explicitly solving for the state. Phase plane analysis is particularly suited to
second order nonlinear systems with no input or constant inputs. It can be extended to
cover other inputs as well, such as ramp inputs, pulse inputs and impulse inputs.
Phase Portraits
From the fundamental theorem of uniqueness of solutions of the state equations or
differential equations, it can be seen that the solution of the state equation starting from an
initial state in the state space is unique. This will be true if f1(x1, x2) and f2(x1, x2) are
analytic. For such a system, consider the points in the state space at which the derivatives
of all the state variables are zero. These points are called singular points. These are in fact
equilibrium points of the system. If the system is placed at such a point, it will continue to
lie there if left undisturbed. A family of phase trajectories starting from different initial
states is called a phase portrait. As time t increases, the phase portrait graphically shows
how the system moves in the entire state plane from the initial states in the different
regions. Since the solutions from each of the initial conditions are unique, the phase
trajectories do not cross one another. If the system has nonlinear elements which are
piecewise linear, the complete state space can be divided into different regions and the
phase plane trajectories constructed for each of the regions separately.
Nodal Point: Consider the case where the eigenvalues are real, distinct and negative, as shown
in Figure 3.9(a). For this case the equation of the phase trajectory follows as
z2 = c z1^(λ2/λ1)
where c is an integration constant. The trajectories become a set of parabolas as shown in
Figure 3.9(b), and the equilibrium point is called a node. In the original system of coordinates,
these trajectories appear to be skewed, as shown in Figure 3.9(c).
If the eigenvalues are both positive, the nature of the trajectories does not change, except
that the trajectories diverge out from the equilibrium point, as both z1(t) and z2(t) are
increasing exponentially. The phase trajectories in the x1-x2 plane are as shown in Figure
3.9(d). This type of singularity is also identified as a node, but it is an unstable node, as the
trajectories diverge from the equilibrium point.
(d) Unstable node in (X1,X2)-plane
Fig. 3.9
Saddle Point: Here both eigenvalues are real and of opposite sign. The
corresponding phase portraits are shown in Fig. 3.10. The origin in this case is a saddle point,
which is always unstable, one eigenvalue being positive.
Fig 3.10
Focus Point: Consider a system with complex conjugate eigenvalues. A plot for negative
values of the real part is a family of equiangular spirals. A transformation from (x1, x2) to
(y1, y2) can be carried out to present the trajectory in the form of a true spiral. The
origin, which is a singular point in this case, is called a stable focus. When the eigenvalues
are complex conjugate with positive real parts, the phase portrait consists of expanding
spirals, as shown in the figure, and the singular point is an unstable focus. When transformed
into the x1-x2 plane, the phase portrait in the above two cases is essentially spiralling in
nature, except that the spirals are now somewhat twisted in shape.
Fig 3.11
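The classification of singular points described above depends only on the eigenvalues of the 2×2 system matrix, which follow from its trace and determinant. A small sketch:

```python
import math

def classify_singularity(a, b, c, d, tol=1e-9):
    """Classify the equilibrium of x' = [[a, b], [c, d]] x from its eigenvalues."""
    tr, det = a + d, a*d - b*c
    disc = tr*tr - 4*det
    if disc >= 0:                        # real eigenvalues
        r = math.sqrt(disc)
        l1, l2 = (tr + r)/2, (tr - r)/2
        if l1*l2 < 0:
            return "saddle point"
        return "stable node" if max(l1, l2) < 0 else "unstable node"
    # complex conjugate eigenvalues, real part tr/2
    if abs(tr) < tol:
        return "centre (vortex)"
    return "stable focus" if tr < 0 else "unstable focus"
```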
Consider the linear second order system
ẍ + 2ζωn ẋ + ωn² x = 0
where ζ and ωn are the damping factor and undamped natural frequency of the
system. Defining the state variables as x = x1 and ẋ = x2, we get the state equations in
state variable form as
ẋ1 = x2
ẋ2 = −ωn² x1 − 2ζωn x2
These equations may then be solved for the phase variables x1 and x2, and the time responses
of x1 and x2 for various values of damping and initial conditions can be plotted. When the
differential equations describing the dynamics of the system are nonlinear, it is in general not
possible to obtain a closed form solution for x1, x2. For example, if the spring force is
nonlinear, say (k1x + k2x³), the state equations take the form
ẋ1 = x2
ẋ2 = −(k1/M) x1 − (f/M) x2 − (k2/M) x1³
Solving these equations by integration is no longer an easy task. In such situations, a graphical
method known as the phase-plane method is found to be very helpful. The coordinate plane
with axes corresponding to the dependent variables x1 and x2 is called the phase plane. The
curve described by the state point (x1, x2) in the phase plane with respect to time is called a
phase trajectory. A phase trajectory can be easily constructed by graphical techniques.
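As a numerical illustration of such a phase trajectory, the sketch below integrates the nonlinear-spring state equations with a Runge-Kutta routine (the parameter values M, f, k1, k2 and the initial point are illustrative assumptions):

```python
def phase_trajectory(x0, v0, M=1.0, f=0.5, k1=1.0, k2=1.0,
                     dt=0.01, steps=2000):
    """RK4 integration of x1' = x2, x2' = -(k1*x1 + f*x2 + k2*x1**3)/M,
    returning the phase-plane points (x1, x2)."""
    def g(x1, x2):
        return x2, -(k1*x1 + f*x2 + k2*x1**3) / M
    pts, (x1, x2) = [(x0, v0)], (x0, v0)
    for _ in range(steps):
        a1, b1 = g(x1, x2)
        a2, b2 = g(x1 + dt/2*a1, x2 + dt/2*b1)
        a3, b3 = g(x1 + dt/2*a2, x2 + dt/2*b2)
        a4, b4 = g(x1 + dt*a3, x2 + dt*b3)
        x1 += dt/6*(a1 + 2*a2 + 2*a3 + a4)
        x2 += dt/6*(b1 + 2*b2 + 2*b3 + b4)
        pts.append((x1, x2))
    return pts

traj = phase_trajectory(1.0, 0.0)   # damped: spirals in towards the origin
```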
Isoclines Method:
The slope of the phase trajectory is
dx2/dx1 = f2(x1, x2)/f1(x1, x2) = M
Therefore, the locus of constant slope of the trajectory is given by f2(x1, x2) = M f1(x1, x2).
The above equation gives the equation to the family of isoclines. For different values of M, the
slope of the trajectory, different isoclines can be drawn in the phase plane. Knowing the value of
M on a given isocline, it is easy to draw line segments on each of these isoclines.
Consider, for example, the system ẋ1 = x2, ẋ2 = −x1 − x2. Dividing these equations, we get
the slope of the state trajectory in the x1-x2 plane as
dx2/dx1 = −(x1 + x2)/x2 = M
For a constant value of this slope, say M, we get the equation
x2 = −x1/(M + 1)
which is a straight line in the x1-x2 plane. We can draw different lines in the x1-x2 plane for
different values of M; these lines are called isoclines. If we draw a sufficiently large number of
isoclines to cover the complete state space as shown, we can see how the state trajectories
move in the state plane. Different trajectories can be drawn from different initial conditions. A
large number of such trajectories together form a phase portrait. A few typical trajectories are
shown in Figure 3.13 below.
Fig. 3.13
The procedure for construction of the phase trajectories can be summarised as below:
1. For the given nonlinear differential equation, define the state variables as x1 and x2 and
obtain the state equations as
ẋ1 = x2
ẋ2 = f2(x1, x2)
2. Determine the equation to the isoclines from the trajectory slope
dx2/dx1 = f2(x1, x2)/x2 = M
3. For different values of the slope M, draw the corresponding isoclines in the phase plane.
4. On each of the isoclines, draw small line segments with a slope M.
5. From an initial condition point, draw a trajectory following the line segments with slopes
M on each of the isoclines.
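The procedure can be checked numerically for the example system ẋ1 = x2, ẋ2 = −x1 − x2: every point on the isocline x2 = −x1/(M + 1) should give a trajectory slope of exactly M.

```python
def isocline_x2(x1, M):
    """Isocline for the example system x1' = x2, x2' = -x1 - x2:
    setting dx2/dx1 = -(x1 + x2)/x2 = M gives x2 = -x1/(M + 1)."""
    return -x1 / (M + 1.0)

def trajectory_slope(x1, x2):
    return (-x1 - x2) / x2

# on every isocline the trajectory slope equals the chosen M
for M in (-2.5, -0.5, 0.0, 1.0, 4.0):
    for x1 in (-2.0, 0.5, 3.0):
        x2 = isocline_x2(x1, M)
        assert abs(trajectory_slope(x1, x2) - M) < 1e-9
```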
Delta Method:
The delta method of constructing phase trajectories is applied to systems of the form
ẍ + f(x, ẋ, t) = 0
where f(x, ẋ, t) may be linear or nonlinear and may even be time varying, but must be
continuous and single valued.
With the help of this method, phase trajectory for any system with step or ramp or any time
varying input can be conveniently drawn. The method results in considerable time saving
when a single or a few phase trajectories are required rather than a complete phase portrait.
While applying the delta method, the above equation is first converted to the form
ẍ + ωn²[x + δ(x, ẋ, t)] = 0
In general δ(x, ẋ, t) depends upon the variables x, ẋ and t, but for short intervals the changes in
these variables are negligible. Thus over a short interval, we have
ẍ + ωn²[x + δ] = 0, where δ is a constant.
Defining x1 = x and x2 = ẋ/ωn, the state equations become
ẋ1 = ωn x2
ẋ2 = −ωn(x1 + δ)
so that the slope of the trajectory is
dx2/dx1 = −(x1 + δ)/x2
With δ known at any point P on the trajectory and assumed constant for a short interval, we can
draw a short segment of the trajectory by using the trajectory slope dx2/dx1 given in the above
equation. A simple geometrical construction given below can be used for this purpose.
Example : For the system described by the equation given below, construct the trajectory
starting at the initial point (1, 0) using delta method.
ẍ + ẋ + x² = 0
Let x = x1 and ẋ = x2 (here ωn = 1); then
ẋ1 = x2
ẋ2 = −x2 − x1²
Writing this as
ẋ2 = −(x1 + x2 + x1² − x1)
so that
δ = x2 + x1² − x1
Fig.3.14
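The geometric construction of the delta method, in which each short trajectory segment is an arc of a circle centred at (−δ, 0) on the x1 axis, can be sketched in code for this example:

```python
import math

def delta_trajectory(x1, x2, dtheta=0.01, steps=400):
    """Delta-method construction for x'' + x' + x**2 = 0 (w_n = 1), with
    delta = x2 + x1**2 - x1. Each short segment is an arc of the circle
    centred at (-delta, 0), swept clockwise by dtheta."""
    pts = [(x1, x2)]
    for _ in range(steps):
        d = x2 + x1**2 - x1          # delta, frozen over the short interval
        cx = -d                      # arc centre lies on the x1 axis
        r = math.hypot(x1 - cx, x2)
        ang = math.atan2(x2, x1 - cx) - dtheta   # clockwise rotation
        x1, x2 = cx + r*math.cos(ang), r*math.sin(ang)
        pts.append((x1, x2))
    return pts

traj = delta_trajectory(1.0, 0.0)    # start at the initial point (1, 0)
```

From (1, 0) the state first moves into the lower half plane (ẋ becomes negative), tracing the spiral of Fig. 3.14 segment by segment.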
Limit Cycles:
Limit cycles have a distinct geometric configuration in the phase plane portrait, namely that
of an isolated closed path in the phase plane. A given system may have more than one limit
cycle. A limit cycle represents a steady state oscillation, to which or from which all nearby
trajectories converge or diverge. In a nonlinear system, limit cycles describe the
amplitude and period of a self-sustained oscillation. It should be pointed out that not all
closed curves in the phase plane are limit cycles. A phase-plane portrait of a conservative
system, in which there is no damping to dissipate energy, is a continuous family of closed
curves. Closed curves of this kind are not limit cycles, because none of these curves is
isolated from the others: such trajectories always occur as a continuous family, so that there
are closed curves in any neighbourhood of any particular closed curve. Limit cycles, on the
other hand, are periodic motions exhibited only by nonlinear non-conservative systems.
As an example, let us consider the well-known Van der Pol differential equation
d²x/dt² − μ(1 − x²) dx/dt + x = 0
The figure shows the phase trajectories of the system for μ > 0 and μ < 0. In the case μ > 0, we
observe that for large values of x1(0) the system response is damped and the amplitude of x1(t)
decreases till the system state enters the limit cycle, as shown by the outer trajectory. On the
other hand, if x1(0) is initially small, the damping is negative, and hence the amplitude of x1(t)
increases till the system state enters the limit cycle, as shown by the inner trajectory. When
μ < 0, the trajectories move in the opposite directions, as shown in Figure 3.15.
A limit cycle is called stable if trajectories near it, originating from outside or inside,
converge to the limit cycle. In this case the system exhibits a sustained oscillation with
constant amplitude, as shown in figure (i). The inside of the limit cycle is an unstable region,
in the sense that trajectories diverge away from the singular point towards the limit cycle,
and the outside is a stable region, in the sense that trajectories converge to the limit cycle.
A limit cycle is called unstable if trajectories near it diverge from it. In this case an unstable
region surrounds a stable region. If a trajectory starts within the stable region, it converges to
a singular point within the limit cycle. If a trajectory starts in the unstable region, it diverges
with time to infinity, as shown in figure (ii). The inside of an unstable limit cycle is thus the
stable region, and the outside the unstable region.
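The convergence of inner and outer trajectories to the same limit cycle can be verified numerically for the Van der Pol equation; for μ = 1 both a small and a large initial condition settle to an oscillation of amplitude close to 2.

```python
def vdp_amplitude(x0, mu=1.0, dt=0.005, t_end=60.0):
    """RK4 integration of x'' - mu*(1 - x**2)*x' + x = 0; returns the peak
    |x| over the last quarter of the run (the limit-cycle amplitude)."""
    def g(x, v):
        return v, mu*(1.0 - x*x)*v - x
    x, v, peak = x0, 0.0, 0.0
    n = int(t_end / dt)
    for i in range(n):
        a1, b1 = g(x, v)
        a2, b2 = g(x + dt/2*a1, v + dt/2*b1)
        a3, b3 = g(x + dt/2*a2, v + dt/2*b2)
        a4, b4 = g(x + dt*a3, v + dt*b3)
        x += dt/6*(a1 + 2*a2 + 2*a3 + a4)
        v += dt/6*(b1 + 2*b2 + 2*b3 + b4)
        if i > 3*n//4:
            peak = max(peak, abs(x))
    return peak

inner = vdp_amplitude(0.1)   # small start: negative damping pushes outward
outer = vdp_amplitude(4.0)   # large start: positive damping pulls inward
```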
Describing Function Method of Non Linear Control System
The describing function method is used for finding out the stability of a nonlinear system. Of
all the analytical methods developed over the years for nonlinear control systems, this
method is generally agreed upon as being the most practically useful. It is basically an
approximate extension of frequency response methods, including the Nyquist stability
criterion, to nonlinear systems.
The describing function of a nonlinear element is defined as the complex ratio of the
amplitudes and the phase angle between the fundamental harmonic component of the output
and the input sinusoid. It is also called the sinusoidal describing function. Mathematically,
N(X, ω) = (Y1/X) ∠φ1
where X is the amplitude of the input sinusoid, Y1 the amplitude of the fundamental
harmonic component of the output, and φ1 the phase shift of that component.
Let us consider the block diagram of a nonlinear system shown below, where G1(s) and G2(s)
represent the linear elements and N represents the nonlinear element.
Let us assume that the input x to the nonlinear element is sinusoidal, i.e.
x = X sin ωt
For this input, the output y of the nonlinear element will be a non-sinusoidal periodic
function that may be expressed in terms of a Fourier series as
y = Y0 + Σ (An cos nωt + Bn sin nωt), n = 1, 2, 3, …
Most nonlinearities are odd symmetrical or odd half-wave symmetrical; the mean value Y0
in all such cases is zero, and therefore the output is
y = A1 cos ωt + B1 sin ωt + A2 cos 2ωt + B2 sin 2ωt + …
As G1(s)G2(s) has low-pass characteristics, it can be assumed, to a good degree of
approximation, that all higher harmonics of y are filtered out in the process, so that the input
x to the nonlinear element N is mainly contributed by the fundamental component of y, i.e.
the first harmonic. So in describing function analysis we assume that only the fundamental
harmonic component of the output is significant, since the higher harmonics in the output of
a nonlinear element are often of smaller amplitude than the fundamental harmonic
component. Moreover, most control systems are low-pass filters, with the result that the
higher harmonics are very much attenuated compared with the fundamental harmonic
component. Hence only y1 need be considered.
Describing Function for Saturation Non Linearity
We have the characteristic curve for saturation as shown in Figure 3.16.
On substituting the value of the output in the above equation and integrating from 0 to 2π,
we find that the constant A1 is zero.
Similarly, the Fourier constant B1 for the given output can be calculated as
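The closed-form describing function of saturation, N(X) = (2k/π)[sin⁻¹(s/X) + (s/X)√(1 − (s/X)²)] for X > s and N(X) = k otherwise, can be cross-checked against a direct numerical evaluation of the Fourier coefficient B1 (here k is the slope and s the input value at which saturation begins; both are illustrative parameters):

```python
import math

def sat(u, k=1.0, s=1.0):
    return max(-k*s, min(k*s, k*u))

def df_numeric(X, k=1.0, s=1.0, n=20000):
    """N(X) = B1/X, with B1 the fundamental sine coefficient of the output
    for the input x = X sin(theta), evaluated by the midpoint rule."""
    b1 = 0.0
    for i in range(n):
        th = 2.0*math.pi*(i + 0.5)/n
        b1 += sat(X*math.sin(th), k, s)*math.sin(th)
    return (2.0*b1/n) / X            # (1/pi) * integral over one period

def df_analytic(X, k=1.0, s=1.0):
    if X <= s:
        return k                     # no saturation: the element is linear
    r = s/X
    return (2.0*k/math.pi)*(math.asin(r) + r*math.sqrt(1.0 - r*r))
```

The phase shift is zero because saturation is a single-valued odd nonlinearity, so A1 = 0.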
Describing Function for Ideal Relay
We have the characteristic curve for an ideal relay as shown in Figure 3.17.
On substituting the value of the output in the above equation and integrating from 0 to 2π,
we find that the constant A1 is zero.
Similarly, the Fourier constant B1 for the given output can be calculated as
On substituting the output y(t) = Y in the above equation, we obtain the value of the
constant B1.
The phase angle of the describing function can then be calculated.
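For the ideal relay the result reduces to the well-known N(X) = 4Y/(πX) with zero phase shift, which a direct Fourier computation confirms:

```python
import math

def relay_df_numeric(X, Y=1.0, n=20000):
    """Fundamental sine coefficient of the ideal-relay output, divided by X."""
    b1 = 0.0
    for i in range(n):
        th = 2.0*math.pi*(i + 0.5)/n
        y = Y if math.sin(th) > 0 else -Y   # relay output for x = X sin(th)
        b1 += y*math.sin(th)
    return (2.0*b1/n) / X

def relay_df_analytic(X, Y=1.0):
    return 4.0*Y/(math.pi*X)
```

Note that the describing function falls off as 1/X: the larger the input amplitude, the smaller the effective gain of the relay.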
We have the characteristic curve for a real relay as shown in Figure 3.18. If X is less
than the dead zone Δ, the relay produces no output; the first harmonic component of the
Fourier series is then zero, and the describing function is also zero. If X > Δ, the relay
produces an output.
Fig. 3.18. Characteristic Curve for Real Relay Non Linearities.
On substituting the value of the output in the above equation and integrating from 0 to 2π,
we find that the constant A1 is zero.
Similarly, the Fourier constant B1 for the given output can be calculated as
Describing Function for Backlash Non Linearity
Let us take the input function as
x = X sin ωt
On substituting the value of the output in the above equation and integrating from 0 to 2π,
we obtain the value of the constant A1.
Similarly, we can calculate the Fourier constant B1 for the given output; on substituting the
value of the output and integrating from 0 to π, we obtain the value of B1.
The describing function of backlash can then be calculated from the constants A1 and B1.
Liapunov’s Stability Analysis
Consider a dynamical system which satisfies
ẋ = f(x, t) … … (3.3)
We will assume that f(x, t) satisfies the standard conditions for the existence and uniqueness
of solutions. Such conditions are, for instance, that f(x, t) is Lipschitz continuous with respect
to x, uniformly in t, and piecewise continuous in t. A point x* ∈ Rⁿ is an equilibrium point of
equation (3.3) if f(x*, t) ≡ 0.
Intuitively and somewhat crudely speaking, we say an equilibrium point is locally stable if all
solutions which start near x* (meaning that the initial conditions are in a neighborhood of x*)
remain near x* for all time.
The equilibrium point x* is said to be locally asymptotically stable if x* is locally stable and,
furthermore, all solutions starting near x* tend towards x* as t → ∞.
We say somewhat crude because the time-varying nature of equation (3.3) introduces all
kinds of additional subtleties. Nonetheless, it is intuitive that a pendulum has a locally stable
equilibrium point when the pendulum is hanging straight down and an unstable equilibrium
point when it is pointing straight up. If the pendulum is damped, the stable equilibrium point
is locally asymptotically stable. By shifting the origin of the system, we may assume that the
equilibrium point of interest occurs at x* = 0. If multiple equilibrium points exist, we will
need to study the stability of each by appropriately shifting the origin.
Definition 3.1: The equilibrium point x* = 0 of (3.3) is stable (in the sense of Lyapunov) at
t = t0 if for any ε > 0 there exists a δ(t0, ε) > 0 such that
‖x(t0)‖ < δ ⟹ ‖x(t)‖ < ε for all t ≥ t0 … … (3.4)
Lyapunov stability is a very mild requirement on equilibrium points. In particular, it does not
require that trajectories starting close to the origin tend to the origin asymptotically. Also,
stability is defined at a time instant t0. Uniform stability is a concept which guarantees that
the equilibrium point is not losing stability. We insist that, for a uniformly stable equilibrium
point x*, δ in Definition 3.1 not be a function of t0, so that equation (3.4) may hold for all
t0. Asymptotic stability is made precise in the following definition:
Uniform asymptotic stability requires:
Finally, we say that an equilibrium point is unstable if it is not stable. This is less of a
tautology than it sounds and the reader should be sure he or she can negate the definition of
stability in the sense of Lyapunov to get a definition of instability. In robotics, we are almost
always interested in uniformly asymptotically stable equilibria. If we wish to move the robot
to a point, we would like to actually converge to that point, not merely remain nearby. Figure
below illustrates the difference between stability in the sense of Lyapunov and asymptotic
stability.
Definitions 3.1 and 3.2 are local definitions; they describe the behavior of a system near an
equilibrium point. We say an equilibrium point x* is globally stable if it is stable for all initial
conditions 𝑥0 ∈ 𝑅𝑛 . Global stability is very desirable, but in many applications it can be
difficult to achieve. We will concentrate on local stability theorems and indicate where it is
possible to extend the results to the global case. Notions of uniformity are only important for
time-varying systems. Thus, for time-invariant systems, stability implies uniform stability
and asymptotic stability implies uniform asymptotic stability.
Basic theorem of Lyapunov
Let V(x, t) be a non-negative function with derivative V̇ along the trajectories of the system.
1. If V(x, t) is locally positive definite and V̇(x, t) ≤ 0 locally in x and for all t, then the
origin of the system is locally stable (in the sense of Lyapunov).
2. If V(x, t) is locally positive definite and decrescent, and V̇(x, t) ≤ 0 locally in x and for all
t, then the origin of the system is uniformly locally stable (in the sense of Lyapunov).
3. If V(x, t) is locally positive definite and decrescent, and −V̇(x, t) is locally positive
definite, then the origin of the system is uniformly locally asymptotically stable.
4. If V(x, t) is positive definite and decrescent, and −V̇(x, t) is positive definite, then the
origin of the system is globally uniformly asymptotically stable.
Theorem-1
Suppose there exists a scalar function v(x) which, for some real number ε > 0, satisfies the
following properties for all x in the region ‖x(t)‖ ≤ ε:
Theorem-2
If property (d) of Theorem-1 is replaced by
(d) dv/dt < 0, x ≠ 0 (i.e. dv/dt is a negative definite scalar function),
then the system is asymptotically stable.
This is intuitively obvious: since the continuous function v > 0 except at x = 0 satisfies
dv/dt < 0, we expect that x will eventually approach the origin. We shall omit the rigorous
proof of this theorem.
Theorem-3
V(x) → ∞ as ‖x‖ → ∞
Instability
It may be noted that instability in a nonlinear system can be established by direct recourse to
the instability theorem of the direct method. The basic instability theorem is presented below:
Theorem-4
Consider a system
ẋ = f(x); f(0) = 0
Suppose there exists a scalar function W(x) which, for some real number ε > 0, satisfies the
following properties for all x in the region ‖x‖ ≤ ε:
(a) W(x) > 0, x ≠ 0
(b) W(0) = 0
(c) W(x) has continuous partial derivatives with respect to all components of x
(d) dW/dt ≥ 0
Then the system is unstable at the origin.
In the case of linear systems, the direct method of Liapunov provides a simple approach to
stability analysis. It must be emphasized that, compared to the results already presented, no
new results are obtained by the use of the direct method for the stability analysis of linear
systems. However, the study of linear systems using the direct method is quite useful
because it extends our thinking to nonlinear systems.
Consider a linear autonomous system described by the state equation
Ẋ = AX … … (3.6)
The linear system is asymptotically stable in-the-large at the origin if and only if, given any
symmetric positive definite matrix Q, there exists a symmetric positive definite matrix P
which is the unique solution of
AᵀP + PA = −Q … … (3.7)
Proof
To prove the sufficiency of the result of the above theorem, let us assume that a symmetric
positive definite matrix P exists which is the unique solution of eqn. (3.7). Consider the
scalar function
V(X) = XᵀPX
Then
V̇(X) = ẊᵀPX + XᵀPẊ = Xᵀ(AᵀP + PA)X = −XᵀQX < 0
so that V̇(X) is negative definite and the system is asymptotically stable.
In order to show that the result is also necessary, suppose that the system is asymptotically
stable while the solution P of eqn. (3.7) is negative definite. Consider the scalar function
V(X) = −XᵀPX … … (3.8)
which is then positive definite. Therefore
V̇(X) = −(ẊᵀPX + XᵀPẊ) = −Xᵀ(AᵀP + PA)X = XᵀQX > 0
This is a contradiction, since V(X) given by eqn. (3.8) then satisfies the instability theorem.
Thus the conditions for the positive definiteness of P are necessary and sufficient for
asymptotic stability of the system of eqn. (3.6).
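For a 2×2 system, eqn. (3.7) reduces to three scalar linear equations in the entries of the symmetric matrix P, which can be solved directly. The sketch below solves them by Gaussian elimination and applies Sylvester's test to P (the example matrices are illustrative):

```python
def solve3(rows, rhs):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    m = [row[:] + [b] for row, b in zip(rows, rhs)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(i + 1, 3):
            fac = m[r][i] / m[i][i]
            for c in range(i, 4):
                m[r][c] -= fac * m[i][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (m[i][3] - sum(m[i][j]*x[j] for j in range(i + 1, 3))) / m[i][i]
    return x

def lyapunov_2x2(A, Q):
    """Solve A^T P + P A = -Q for symmetric P = [[p11, p12], [p12, p22]]."""
    (a, b), (c, d) = A
    rows = [[2*a, 2*c, 0.0],        # (1,1) entry: 2(a*p11 + c*p12) = -q11
            [b, a + d, c],          # (1,2) entry
            [0.0, 2*b, 2*d]]        # (2,2) entry
    p11, p12, p22 = solve3(rows, [-Q[0][0], -Q[0][1], -Q[1][1]])
    return [[p11, p12], [p12, p22]]

def is_positive_definite(P):
    # Sylvester's test for a symmetric 2x2 matrix
    return P[0][0] > 0 and P[0][0]*P[1][1] - P[0][1]**2 > 0

A = [[0.0, 1.0], [-2.0, -3.0]]      # eigenvalues -1, -2: stable
P = lyapunov_2x2(A, [[1.0, 0.0], [0.0, 1.0]])   # Q = I
```

Here P works out to [[1.25, 0.25], [0.25, 0.25]], which is positive definite, confirming asymptotic stability.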
As has been said earlier, the Liapunov theorems give only sufficient conditions on system
stability, and furthermore there is no unique way of constructing a Liapunov function, except
in the case of linear systems, where a Liapunov function can always be constructed and both
necessary and sufficient conditions established. Because of this drawback, a host of methods
have become available in the literature, and many refinements have been suggested to enlarge
the region in which the system is found to be stable. Since this treatise is meant as a first
exposure of the student to the Liapunov direct method, only two of the relatively simpler
techniques of constructing a Liapunov function are advanced here.
Krasovskii’s method
Consider a system
ẋ = f(x); f(0) = 0
Choose the Liapunov function
V = fᵀPf … … (3.9)
where P is a symmetric positive definite matrix. Then
ḟ = (∂f/∂x)ẋ = Jf
where J is the Jacobian matrix of f(x). Therefore
V̇ = ḟᵀPf + fᵀPḟ = fᵀJᵀPf + fᵀPJf = fᵀ(JᵀP + PJ)f
Let
Q = JᵀP + PJ
Since V is positive definite, for the system to be asymptotically stable Q should be negative
definite. If, in addition, V(x) → ∞ as ‖x‖ → ∞, the system is asymptotically stable in-the-large.
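As a sketch of Krasovskii's method with P = I, take the illustrative system ẋ1 = −3x1 + x2, ẋ2 = x1 − x2 − x2³ (the same system appears in model question 6 of this module). Here Q = Jᵀ + J is negative definite at every point, so the origin is asymptotically stable:

```python
def jacobian(x1, x2):
    """Jacobian of f1 = -3*x1 + x2, f2 = x1 - x2 - x2**3."""
    return [[-3.0, 1.0],
            [1.0, -1.0 - 3.0*x2*x2]]

def krasovskii_q_negative_definite(x1, x2):
    """With P = I, Q = J^T + J; Sylvester's test for negative definiteness."""
    J = jacobian(x1, x2)
    q11, q12, q22 = 2*J[0][0], J[0][1] + J[1][0], 2*J[1][1]
    return q11 < 0 and q11*q22 - q12*q12 > 0
```

For this system q11 = −6 and the determinant is 8 + 36·x2², positive everywhere, so the test succeeds at every state.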
POPOV CRITERION
MODEL QUESTIONS
Module-3
1. Explain how jump phenomena can occur in a power frequency circuit. Extend this
concept to show that a ferro resonant circuit can be used to stabilize wide
fluctuations in supply voltage of a.c. mains in a CVT(constant voltage
transformer).
2. Explain various types of equilibrium points encountered in non-linear systems and
draw approximately the phase plane trajectories.
3. Bring out the differences between Liapunov's stability criterion and Popov's
stability criterion.
4. What do you understand by a limit cycle? Explain.
(b) Draw the phase trajectory for the system described by the following differential equation
d²X/dt² + 0.6 dX/dt + X = 0
with X(0) = 1 and dX/dt(0) = 0. [5]
6. Investigate the stability of the equilibrium state for the system governed by:
dX₁/dt = −3X₁ + X₂
dX₂/dt = X₁ − X₂ − X₂³ [7]
7. Distinguish between the concepts of stability, asymptotic stability and global
stability. [3]
8. Write short notes on [3.5×6]
(a) Signal stabilisation
(b) Delta method of drawing phase trajectories
(c) Phase plane portrait
(d) Jump resonance in non-linear closed-loop systems
(e) Stable and unstable limit cycles
(f) Popov's stability criterion
9. A system is described by
d²x/dt² + sin x = 0.707
Draw the phase-plane trajectory when the initial conditions are x(0) = π/3, ẋ(0) = 0. Use the phase-plane δ method. Compute x vs. t till t = 0.1 sec. [8]
10. Determine the amplitude and frequency of oscillation of the limit cycle of the system
shown in Figure below. Find the stability of the limit cycle oscillation. [16]
11. Write short notes on Popov's stability criterion and its geometrical interpretation. [4]
12. Derive the expression for the describing function of the non-linearity shown in the figure below. [14]
14. What do you mean by sign definiteness of a function? Check the positive definiteness of
V(X) = x₁² + 2x₂²/(1 + x₂²) [4]
15. Distinguish between the concepts of stability, asymptotic stability & global stability.
[4]
16. (a) What are singular points in a phase plane? Explain the following types of
singularity with sketches: [9]
Stable node, unstable node, saddle point, stable focus, unstable focus, vortex.
(b) Obtain the describing function of N(x) in figure below. Derive the formula used.
[6]
17. (a) Evaluate the describing function of the non-linear element shown in the figure below. [6]
(b) This non-linear element forms part of the closed-loop system shown in the figure below. Making use of describing-function analysis, determine the frequency, amplitude and stability of any possible self-oscillation. [10]
18. (a) Explain the method of drawing trajectories in the phase plane using [10]
(i) Lienard's construction
(ii) Pell's method
(b) A second-order non-linear system is described by [6]
ẍ + 25(1 + 0.1ẋ²)x = 0
Using the delta method, obtain the first five points in the phase plane for the initial conditions
X(0) = 1.8, ẋ(0) = −1.6
19. (a) Is the following quadratic form negative definite? [5]
Q = −x₁² − 3x₂² − 11x₃² + 2x₁x₂ − 4x₂x₃ − 2x₁x₃
(b) State and prove Liapunov's theorem for asymptotic stability of the system Ẋ = AX.
Hence show that the following linear autonomous model [6+5]
Ẋ = | 0   1 | X
    | −k  −a |
is asymptotically stable if a > 0, k > 0.
20. (a) Bring out the differences between Liapunov's stability criterion and Popov's stability criterion. [5]
(b) ẋ₁ = x₂
24. (a) A non-linear system is governed by
d²x/dt² + 8x − 4x² = 0
Determine the singular points and their nature. Plot the trajectory passing through (X₁ = 2, X₂ = 0) without any approximation.
(b) What are the limitations of phase-plane analysis? [12+3]
25. (a) Find the describing function of the following types of non-linearities: [8]
(i) ideal on-off relay
(ii) ideal saturation
(b) Derive a Liapunov function for the system defined by [8]
ẋ₁ = x₂
ẋ₂ = −3x₁² − 3x₂
Also check the stability of the system.
26. (a) Determine the singular points in the phase plane and sketch the phase-plane trajectories for a system with the characteristic equation
d²x(t)/dt² + 8x(t) − 4x²(t) = 0 [8]
(b) A system is shown in the figure below.
Will there be a limit cycle? If so, determine its amplitude and frequency. [8]
MODULE-IV
OPTIMAL CONTROL SYSTEMS
Introduction:
There are two approaches to the design of control systems. In one approach, we select the configuration of the overall system by introducing compensators to meet the given specifications on performance. In the other approach, for a given plant we find an overall system that meets the given specifications & then compute the necessary compensators.
In the classical design, based on the first approach, the designer is given a set of specifications in the time domain or in the frequency domain, together with the system configuration. Compensators are selected that give, as closely as possible, the desired system performance. In general, it may not be possible to satisfy all the desired specifications; then, through a trial & error procedure, an acceptable system performance is achieved.
The trial & error uncertainties are eliminated in the parameter optimization method. In the parameter optimization procedure, the performance specification consists of a single performance index. For a fixed system configuration, the parameters that minimize the performance index are selected.
(i) Express the performance index J as a function of the free parameters:
J = f(K₁, K₂, …, Kₙ) ……………….(1)
(ii) Determine the solution set Kᵢ of the equations
∂J/∂Kᵢ = 0;  i = 1, 2, …, n ………………………….(2)
Equations (2) give the necessary conditions for J to be a minimum.
Sufficient conditions
From the solution set of equation (2), find the subset that satisfies the sufficient conditions, which require that the Hessian matrix given below be positive definite:
H = | ∂²J/∂K₁²     ∂²J/∂K₁∂K₂   …   ∂²J/∂K₁∂Kₙ |
    | ∂²J/∂K₂∂K₁   ∂²J/∂K₂²     …   ∂²J/∂K₂∂Kₙ |
    | …            …                 …          |
    | ∂²J/∂Kₙ∂K₁   ∂²J/∂Kₙ∂K₂   …   ∂²J/∂Kₙ²   | ……………(3)
Since ∂²J/∂Kᵢ∂Kⱼ = ∂²J/∂Kⱼ∂Kᵢ, the matrix H is always symmetric.
(iii) If there are two or more sets of Kᵢ satisfying the necessary as well as the sufficient conditions of minimization given by equations (2) & (3) respectively, then compute the corresponding J for each set.
The set that has the smallest J gives the optimal parameters.
The minimization problem is more easily solved if we can express the performance index in terms of transform-domain quantities.
For a quadratic performance index, this can be done by using Parseval's theorem, which allows us to write
∫₀^∞ x²(t) dt = (1/2πj) ∫₋ⱼ∞^{j∞} X(s)X(−s) ds ………………………(4)
The value of the right-hand integral in equation (4) can easily be found from published tables, provided that X(s) can be written in the form X(s) = B(s)/A(s), where A(s) has zeros only in the left half of the complex plane.
J₁ = b₀² / (2a₀a₁)
J₂ = (b₁²a₀ + b₀²a₂) / (2a₀a₁a₂)
J₃ = [b₂²a₀a₁ + (b₁² − 2b₀b₂)a₀a₃ + b₀²a₂a₃] / [2a₀a₃(−a₀a₃ + a₁a₂)]
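As a sanity check on the tabulated values, the simplest entry J₁ can be verified by direct time-domain integration; the sketch below is an illustration, and the values of a₀, a₁, b₀ are arbitrary stable choices, not from the text.

```python
import numpy as np

# Check the tabulated value J1 = b0^2/(2*a0*a1) for X(s) = b0/(a1*s + a0),
# whose inverse transform is x(t) = (b0/a1)*exp(-(a0/a1)*t), against direct
# trapezoidal integration of x^2(t) over a long horizon.
a0, a1, b0 = 3.0, 2.0, 5.0

t = np.linspace(0.0, 50.0, 200001)
x = (b0 / a1) * np.exp(-(a0 / a1) * t)
dt = t[1] - t[0]
J_numeric = float(np.sum((x[:-1] ** 2 + x[1:] ** 2) / 2.0) * dt)

J_table = b0 ** 2 / (2.0 * a0 ** 1 * a1)
print(abs(J_numeric - J_table) < 1e-6)   # prints: True
```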
The design objective in a servomechanism or tracking problem is to keep the error e(t) small. So the performance index J = ∫₀^∞ e²(t) dt is to be minimized if the control u(t) is not constrained in magnitude.
EXAMPLE
Referring to the block diagram given below, consider G(s) = 100/s² and R(s) = 1/s. Determine the optimal value of the parameter K such that J = ∫₀^∞ e²(t) dt is minimum.
Solution
H(s) = G(s)/(1 + G(s)Ks) = (100/s²)/(1 + 100K/s) = 100/[s(s + 100K)]
E(s)/R(s) = 1/(1 + H(s))
⇒ E(s) = R(s)/(1 + H(s)) = (1/s)·s(s + 100K)/(s² + 100Ks + 100) = (s + 100K)/(s² + 100Ks + 100)
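The example can be completed with the J₂ table entry. Matching E(s) = (s + 100K)/(s² + 100Ks + 100) gives b₁ = 1, b₀ = 100K, a₂ = 1, a₁ = 100K, a₀ = 100, so J(K) = (1 + 100K²)/(200K); setting dJ/dK = 0 yields K = 0.1 with J_min = 0.1. A small numerical sketch confirming this:

```python
import numpy as np

# ISE from the J2 table for E(s) = (s + 100K)/(s^2 + 100Ks + 100):
#   J(K) = (b1^2*a0 + b0^2*a2)/(2*a0*a1*a2) = (1 + 100*K**2)/(200*K)
def J(K):
    return (1.0 + 100.0 * K ** 2) / (200.0 * K)

# dJ/dK = 1/2 - 1/(200*K**2) = 0 gives K = 0.1; confirm by a grid search
Ks = np.linspace(0.01, 1.0, 100001)
K_opt = float(Ks[np.argmin(J(Ks))])
print(round(K_opt, 3), round(J(K_opt), 3))   # prints: 0.1 0.1
```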
The optimal design of servo systems obtained by minimizing the performance index
J = ∫₀^∞ e²(t) dt ……….(5)
may be unsatisfactory because it may lead to excessively large magnitudes of some control signals.
A more realistic solution to the problem is reached if the performance index is modified to account for physical constraints like saturation in physical devices. Therefore, a more realistic PI should be to minimize
J = ∫₀^∞ e²(t) dt ……….(6)
subject to the constraint
max |u(t)| ≤ M …………….(6a)
If the criterion given by equation (6) is used, the resulting optimal system is not necessarily a linear system; i.e., in order to implement the optimal design, nonlinear &/or time-varying devices are required. So the performance criterion given by equation (6) is replaced by the following quadratic PI:
J = ∫₀^∞ [e²(t) + λu²(t)] dt ……….(7)
For λ = 0, this reduces to the unconstrained index of equation (6); as λ → ∞, the index is dominated by
J = ∫₀^∞ u²(t) dt ……….(8)
& the optimal system that minimizes this J is one with u = 0.
From these two extreme cases, we conclude that if λ is properly chosen, then the constraint of (6a) will be satisfied.
EXAMPLE
G(s) = 100/s²;  R(s) = 1/s
Determine the optimal values of the parameters K₁ & K₂ such that
(i) Jₑ = ∫₀^∞ e²(t) dt is minimized
(ii) Jᵤ = ∫₀^∞ u²(t) dt = 0.1
Solution
H(s) = K₁G(s)/(1 + K₁G(s)K₂s) = (100K₁/s²)/(1 + 100K₁K₂/s) = 100K₁/[s(s + 100K₁K₂)]
E(s)/R(s) = 1/(1 + H(s)) = (s² + 100K₁K₂s)/(s² + 100K₁K₂s + 100K₁)
⇒ E(s) = (1/s)·(s² + 100K₁K₂s)/(s² + 100K₁K₂s + 100K₁) = (s + 100K₁K₂)/(s² + 100K₁K₂s + 100K₁)
As E(s) is of 2nd order, the PI is
Jₑ = ∫₀^∞ e²(t) dt = (b₁²a₀ + b₀²a₂)/(2a₀a₁a₂) = (1 + 100K₁K₂²)/(200K₁K₂)
C(s) = 100K₁/[s(s² + 100K₁K₂s + 100K₁)] = (100/s²)U(s)
⇒ U(s) = sK₁/(s² + 100K₁K₂s + 100K₁)
From the tables (with b₁ = K₁, b₀ = 0), Jᵤ = K₁/(200K₂). Forming the augmented index J = Jₑ + λJᵤ,
∂J/∂Kᵢ = 0 for i = 1, 2 gives
λK₁² = 1 ………..(b)
100K₁K₂² − 1 − λK₁² = 0 ……….(c)
For K₁ = 2, K₂ = 0.1,
H = | 1/80   0 |
    | 0      5 |
is positive definite.
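With Jₑ = (1 + 100K₁K₂²)/(200K₁K₂) and Jᵤ = K₁/(200K₂), both obtained from the integral tables, the reported optimum K₁ = 2, K₂ = 0.1 can be recovered by a one-dimensional search after eliminating K₁ through the constraint Jᵤ = 0.1; the sketch below is a numerical check, not part of the original solution.

```python
import numpy as np

# Constrained optimum: minimize Je subject to Ju = 0.1, where
#   Je = (1 + 100*K1*K2**2)/(200*K1*K2)  and  Ju = K1/(200*K2)
def Je(K1, K2):
    return (1.0 + 100.0 * K1 * K2 ** 2) / (200.0 * K1 * K2)

# The constraint Ju = K1/(200*K2) = 0.1 gives K1 = 20*K2,
# leaving a one-dimensional search over K2.
K2s = np.linspace(0.01, 1.0, 100001)
K1s = 20.0 * K2s
K2_opt = float(K2s[np.argmin(Je(K1s, K2s))])
K1_opt = 20.0 * K2_opt
print(round(K1_opt, 2), round(K2_opt, 2))   # prints: 2.0 0.1
```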
This is a special case of the tracking problem in which r(t) = 0. For zero input, the output is zero if all the initial conditions are zero. The response c(t) is due to non-zero initial conditions that, in turn, are caused by disturbances. The primary objective of the design is to damp out the response due to initial conditions quickly, without excessive overshoot & oscillations.
For example, the disturbance torque of the sea causes a ship to roll. The response (roll angle θ(t)) to this disturbance is highly oscillatory. The oscillations in the rolling motion are to be damped out quickly without excessive overshoot.
If there is no constraint on "control effort", the controller which minimizes the performance index
J = ∫₀^∞ [θ(t) − θd(t)]² dt ……………………(9)
will be optimal.
Therefore, the problem of stabilization of a ship against rolling motion is a regulator problem. If, for a disturbance torque applied at t = t₀, the controller is required to regulate the roll motion within a finite time (t_f − t₀), a suitable performance criterion for the design of the optimum controller is to minimize
J = ∫_{t₀}^{t_f} θ²(t) dt ……….(10)
Given the plant
Ẋ(t) = AX(t) + Bu(t) ………………….(11)
find the control function u* which is optimal with respect to the given performance criterion.
When a system variable x₁(t) (the output) is required to be near zero, the performance measure is
J = ∫_{t₀}^{t_f} x₁²(t) dt
A performance index written in terms of two state variables of the system would then be
J = ∫_{t₀}^{t_f} [x₁²(t) + x₂²(t)] dt
Therefore, if the state X(t) of a system described by equation (11) is required to be close to X_d = 0, a design criterion would be to determine a control function that minimizes
J = ∫_{t₀}^{t_f} XᵀX dt
In practice, the control of all the states of the system is not equally important.
Example: if, in addition to the roll angle θ(t) of a ship, the pitch angle ∅(t) is also required to be zero, the PI gets modified to
J = ∫_{t₀}^{t_f} [θ²(t) + λ∅²(t)] dt
Since the roll motion contributes more discomfort to the passengers, in the design of a passenger ship the value of λ will be less than one.
A weighted PI is
J = ∫_{t₀}^{t_f} XᵀQX dt
where Q = error weighting matrix, which is a positive definite, real, symmetric, constant matrix:
Q = | q₁  0   …  0  |
    | 0   q₂  …  0  |
    | ⋮   ⋮       ⋮  |
    | 0   0   …  qₙ |
The ith entry of Q represents the amount of weight the designer places on the constraint on the state variable xᵢ(t). The larger the value of qᵢ relative to the other values of q, the more control effort is spent to regulate xᵢ(t).
To minimize the deviation of the final state 𝑋(𝑡𝑓 ) of the system from the desired state
𝑋𝑑 = 0, a possible performance measure is:
J = Xᵀ(t_f)FX(t_f)
where F = terminal cost weighting matrix, which is a positive definite, real, symmetric, constant matrix.
In the infinite-time state regulator problem (t_f → ∞), the final state should approach the equilibrium state X = 0, so the terminal constraint is no longer necessary. Combining the terminal and error terms,
J = Xᵀ(t_f)FX(t_f) + ∫_{t₀}^{t_f} Xᵀ(t)QX(t) dt
If the PI is modified by adding a penalty term for physical constraints, the solution will be more realistic. This is accomplished by introducing a quadratic control term in the PI:
J = ∫_{t₀}^{t_f} uᵀ(t)R u(t) dt
By giving sufficient weight to the control terms, the amplitude of the controls which minimize the overall PI may be kept within practical bounds, although at the expense of increased error in X(t).
Given the plant
Ẋ(t) = AX(t) + Bu(t) …………..(12)
find the optimal control law u*(t), t ∈ [t₀, t_f], where t₀ & t_f are the specified initial & final times respectively, so that the PI
J = ½Xᵀ(t_f)FX(t_f) + ∫_{t₀}^{t_f} ½[Xᵀ(t)QX(t) + uᵀ(t)R u(t)] dt ……………….(13)
is minimized, subject to the initial state X(t₀) = X₀. Here t_f is fixed & given, & X(t_f) is free.
The matrices Q & F may be positive definite or semidefinite. We shall assume that Q & F are not both simultaneously zero matrices, to avoid the trivial solution.
Solution
The matrix P(t) is obtained from the matrix Riccati equation
Ṗ(t) = −P(t)A − AᵀP(t) − Q + P(t)BR⁻¹BᵀP(t) ……………….(14)
The Riccati equation is nonlinear &, for this reason, we usually cannot obtain closed-form solutions; therefore we must compute P(t) using a digital computer. Numerical integration is carried out backward in time, from t = t_f to t = t₀, with the boundary condition P(t_f) = F. The optimal control law is then
u*(t) = −R⁻¹BᵀP(t)X(t) = −K(t)X(t) …………………..(15)
EXAMPLE
For the scalar case considered here, the Riccati equation reduces to
dP(t)/dt = −4P(t) − 3 + 4P²(t)
⇒ dP(t) = 4[P(t) − 3/2][P(t) + 1/2] dt
Separating variables,
∫_{t_f}^{t} dP(t) / {4[P(t) − 3/2][P(t) + 1/2]} = ∫_{t_f}^{t} dt
With the boundary condition P(t_f) = 0, this gives
(1/8) ln{ [(P(t) − 3/2)/(P(t) + 1/2)] ÷ [(−3/2)/(1/2)] } = t − t_f
⇒ P(t) = (3/2)·(1 − e^{8(t−t_f)}) / (1 + 3e^{8(t−t_f)})
The optimal control law is
u*(t) = −R⁻¹BᵀP(t)x(t) = −4P(t)x(t)
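Since the text notes that the Riccati equation is normally integrated numerically backward in time from P(t_f) = F, a minimal sketch of that procedure for the scalar equation above follows, using simple Euler steps and comparing against the closed-form solution just derived; the horizon t₀ = 0, t_f = 1 is an illustrative choice.

```python
import math

# Backward integration of the scalar Riccati equation
#   dP/dt = 4*P**2 - 4*P - 3,   boundary condition P(tf) = 0,
# compared against the closed form
#   P(t) = 1.5*(1 - exp(8*(t - tf))) / (1 + 3*exp(8*(t - tf)))
tf, t0, n = 1.0, 0.0, 100000
dt = (tf - t0) / n

def pdot(P):
    return 4.0 * P ** 2 - 4.0 * P - 3.0

P = 0.0                          # P(tf) = F = 0
for _ in range(n):               # Euler steps, marching backward in time
    P -= dt * pdot(P)

P_exact = 1.5 * (1 - math.exp(8 * (t0 - tf))) / (1 + 3 * math.exp(8 * (t0 - tf)))
print(abs(P - P_exact) < 1e-3)   # prints: True
```

Note that far from t_f the numerical solution settles at 1.5, which is the positive root of 4P² − 4P − 3 = 0, i.e. the steady-state (algebraic Riccati) value.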
The block diagram for the optimal control system is shown below.
(1) When t_f → ∞, X(∞) → 0 for the optimal system to be stable. Therefore the terminal penalty term has no significance; consequently it does not appear in J, i.e., we set F = 0 in the general quadratic PI.
−P̄A − AᵀP̄ − Q + P̄BR⁻¹BᵀP̄ = 0
(4) Solve the ARE to get P̄; then the optimal control law is given by
u*(t) = −R⁻¹BᵀP̄X(t) = −KX(t)
The optimal law is implemented using a time-invariant Kalman gain, in contrast to the finite-time case. The minimum value of the PI is
J* = ½Xᵀ(0)P̄X(0)
EXAMPLE
J = ∫₀^∞ (x₁² + u²) dt
Simplifying the component equations of the ARE,
−p₁₂²/2 + 2 = 0
p₁₁ − p₁₂p₂₂/2 = 0
−p₂₂²/2 + 2p₁₂ = 0
For P̄ to be a positive definite matrix, we get the solution
P̄ = | 2√2   2  |
    | 2    2√2 |
The optimal control law is given by
u*(t) = −R⁻¹BᵀP̄X(t) = −(1/2)[0 1] | 2√2  2; 2  2√2 | [x₁(t); x₂(t)] = −x₁(t) − √2 x₂(t)
It can be easily verified that the closed-loop system is asymptotically stable (even though Q is only positive semidefinite).
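The system matrices for this example are not legible in the source; the component equations shown are, however, consistent with the double-integrator data assumed below (Ẋ = [0 1; 0 0]X + [0; 1]u, Q = diag(2, 0), R = 2, matching J = ∫(x₁² + u²)dt under the ½-factor convention of eq. (13)). Under those assumptions, this sketch verifies that P̄ satisfies the ARE and that the closed loop is stable:

```python
import numpy as np

# Assumed plant and weights (reconstructed, see lead-in above)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.array([[2.0, 0.0], [0.0, 0.0]])
R = np.array([[2.0]])

P = np.array([[2.0 * np.sqrt(2.0), 2.0],
              [2.0, 2.0 * np.sqrt(2.0)]])

# ARE residual: -P A - A^T P - Q + P B R^-1 B^T P should be the zero matrix
residual = -P @ A - A.T @ P - Q + P @ B @ np.linalg.inv(R) @ B.T @ P
K = np.linalg.inv(R) @ B.T @ P      # gain: u* = -K x = -x1 - sqrt(2)*x2
A_cl = A - B @ K                     # closed-loop dynamics matrix

print(np.allclose(residual, 0.0))                          # prints: True
print(bool(np.all(np.linalg.eigvals(A_cl).real < 0.0)))    # prints: True
```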
In the state regulator problem, we are concerned with making all the components of the state vector X(t) small. In the output regulator problem, on the other hand, we are concerned with making the components of the output vector small.
Given the plant
Ẋ(t) = AX(t) + Bu(t) …………..(17)
Y(t) = CX(t)
find the optimal control law u*(t), t ∈ [t₀, t_f], where t₀ & t_f are the specified initial & final times respectively, so that the PI
J = ½Yᵀ(t_f)F Y(t_f) + ∫_{t₀}^{t_f} ½[Yᵀ(t)QY(t) + uᵀ(t)R u(t)] dt ………………(18)
is minimized.
Tracking Problem
Consider an observable controlled process described by equation (17). Suppose that the vector Z(t) is the desired output.
Find the optimal control law u*(t), t ∈ [t₀, t_f], where t₀ & t_f are the specified initial & final times respectively, so that the PI
J = ½eᵀ(t_f)F e(t_f) + ∫_{t₀}^{t_f} ½[eᵀ(t)Qe(t) + uᵀ(t)R u(t)] dt
is minimized.
If the controlled process given by equation (17) is observable, then we can reduce the output regulator problem to the state regulator problem. Substituting Y(t) = CX(t) in the PI given by equation (18), we get
J = ½Xᵀ(t_f)CᵀFCX(t_f) + ∫_{t₀}^{t_f} ½[Xᵀ(t)CᵀQCX(t) + uᵀ(t)R u(t)] dt
u*(t) = −R⁻¹BᵀP(t)X(t) = −K(t)X(t)
where P(t) is the solution of the matrix Riccati equation given by:
Ṗ(t) = −P(t)A − AᵀP(t) − CᵀQC + P(t)BR⁻¹BᵀP(t) ……………….(19)
Here we shall study a class of tracking problems which are reducible to the form of the output regulator problem. Given the plant
Ẋ(t) = AX(t) + Bu(t)
Y(t) = CX(t)
it is desired to bring & keep the output Y(t) close to the desired output r(t).
Find the optimal control law u*(t), t ∈ [t₀, t_f], where t₀ & t_f are the specified initial & final times respectively, so that the PI
J = ½eᵀ(t_f)F e(t_f) + ∫_{t₀}^{t_f} ½[eᵀ(t)Qe(t) + uᵀ(t)R u(t)] dt
is minimized.
To reduce this problem to the form of the output regulator problem, we consider only those r(t) that can be generated by arbitrary initial conditions Z(0) in the system
Ż(t) = AZ(t)
r(t) = CZ(t)
The matrices A & C are the same as those of the plant. Now define a new variable W = X − Z. Then
Ẇ(t) = AW(t) + Bu(t)
e(t) = CW(t)
Applying the results of the output regulator problem gives immediately that the optimal control for the tracking problem under consideration is
u*(t) = −R⁻¹BᵀP(t)W(t) = −K(t)[X − Z]
Ẋ(t) = AX(t) + BKX(t) = (A + BK)X(t)
Solution
(1) Determine the elements of P̄ as functions of the elements of the feedback matrix K from the equations given below:
(A + BK)ᵀP̄ + P̄(A + BK) + KᵀRK + Q = 0 …………….(20)
(2) The corresponding value of the PI is
J = ½Xᵀ(0)P̄X(0) …………….(21)
If K₁, K₂, …, Kₙ are the free elements of the matrix K, we have
J = f(K₁, K₂, …, Kₙ) ....................................(22)
(3) The necessary & sufficient conditions for J to be minimum are given by
∂J/∂Kᵢ = 0;  i = 1, 2, …, n  (necessary condition)
Hessian matrix is positive definite (sufficient condition)
The solution set Kᵢ of equation (22) that satisfies the necessary & sufficient conditions is thus obtained, which gives the suboptimal solution to the control problem. Of course, Kᵢ must satisfy the further constraint that the closed-loop system be asymptotically stable. If all the parameters of K are free, the procedure above will yield the optimal solution.
Special Case
In this case the matrix P̄ is obtained from equation (20) by substituting R = 0, resulting in the modified matrix equation
(A + BK)ᵀP̄ + P̄(A + BK) + Q = 0 …………….(23)
EXAMPLE
Consider the second-order system, where it is desired to find the optimum ζ which minimizes the integral square error J = ∫₀^∞ e²(t) dt for the initial conditions c(0) = 1, ċ(0) = 0.
Solution
The problem is reframed in state form, with the objective of obtaining a feedback control law under the constraint K₁ = 1:
| ẋ₁ |   | 0  1 | | x₁ |   | 0 |
| ẋ₂ | = | 0  0 | | x₂ | + | 1 | u,   x₁(0) = 1, x₂(0) = 0
u = −[K₁  K₂] [x₁; x₂]
Now J = ∫₀^∞ e²(t) dt = ∫₀^∞ x₁² dt, since e = −c = −x₁.
Therefore
Q = | 2  0 |
    | 0  0 |
Substituting the values in equation (23),
| 0  −1 | | p₁₁  p₁₂ |   | p₁₁  p₁₂ | |  0    1 |   | 2  0 |   | 0  0 |
| 1  −K₂| | p₁₂  p₂₂ | + | p₁₂  p₂₂ | | −1  −K₂ | + | 0  0 | = | 0  0 |
Solving, we get
P̄ = | (1 + K₂²)/K₂   1    |
    | 1              1/K₂ |
The PI is J = ½Xᵀ(0)P̄X(0) = (1 + K₂²)/(2K₂)
For J to be minimum, ∂J/∂K₂ = ½ − 1/(2K₂²) = 0
⟹ K₂ = 1
∂²J/∂K₂² = 1/K₂³ > 0, which is satisfied for K₂ = 1.
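The minimization in this example can also be confirmed numerically; the sketch below reproduces K₂ = 1 and, under the state model assumed above, converts it to the damping ratio the problem asked for.

```python
import numpy as np

# The worked example reduces to the scalar minimization of
#   J(K2) = (1 + K2**2) / (2*K2)
def J(K2):
    return (1.0 + K2 ** 2) / (2.0 * K2)

K2s = np.linspace(0.1, 5.0, 100001)
K2_opt = float(K2s[np.argmin(J(K2s))])
print(round(K2_opt, 2), round(J(K2_opt), 2))   # prints: 1.0 1.0

# With the closed-loop characteristic equation s^2 + K2*s + 1 = 0 (which
# follows from the assumed state model), K2 = 2*zeta, so the optimum
# damping ratio is zeta = K2_opt/2 = 0.5.
```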
(1) Robust controllers (2) Adaptive controllers (3) Fuzzy logic controllers (4) Neural controllers
Adaptive control
An adaptive control system may be thought of as having two loops. One loop is a normal feedback loop with the process (plant) & the controller. The other loop is a parameter-adjustment loop. The block diagram of an adaptive system is shown below. The parameter-adjustment loop is often slower than the normal feedback loop.
There are two main approaches for designing adaptive controllers: model-reference adaptive control (MRAC) and self-tuning control.
The MRAC system is an adaptive system in which the desired performance is expressed in terms of a reference model, which gives the desired response signal.
(b) A reference model for compactly specifying the desired output of the control system.
Fig: Model Reference Adaptive Controller
(2) Control law: The control law is derived based on optimization of a control performance criterion. Since the parameters are estimated on-line, the calculation of the control law is based on a procedure called certainty equivalence, in which the current parameter estimates are accepted while their uncertainties are ignored. This approach of designing the controller using the estimated parameters of the transfer function of the process is known as the indirect self-tuning method.
MODEL QUESTIONS
Module-4
The figures in the right-hand margin indicate marks.
Find the control law that minimises the performance index
J = ∫₀^∞ (X₁² + U²) dt
for the system
| Ẋ₁ |   | 0  1 | | X₁ |   | 0 |
| Ẋ₂ | = | 0  0 | | X₂ | + | 1 | u [10]
Find the control law that minimises the performance index
J = ∫₀^∞ (y² + u²) dt
for the process described by
dy/dt + y = u [9]
(ii) Find the minimum value of J
(iii) Find sensitivity of J with respect to k [15]
8. A linear autonomous system is described by the state equation
Ẋ = | −4K   4K | X
    |  2K  −6K |
Find the restriction on the parameter K to guarantee stability of the system. [15]
9. A first-order system is described by the differential equation
Ẋ(t) = 2X(t) + u(t)
Find the control law that minimises the performance index
J = ½ ∫₀^{t_f} (3X² + ¼u²) dt
when t_f = 1 second. [15]
10. (a) What do you understand by parameter optimisation of a regulator? [6]
(b) Find the control law which minimises the performance index [10]
J = ∫₀^∞ (x₁² + u²) dt
for the system
| ẋ₁ |   | 0  1 | | x₁ |   | 0 |
| ẋ₂ | = | 0  0 | | x₂ | + | 1 | u