Optimal Control

1 Classical and Modern Control

• The classical (conventional) control theory, concerned with single-input, single-output (SISO) systems, is mainly based on Laplace transform theory and its use in representing systems in block diagram form. From Figure 1.1, we see that

Figure 1.1: Classical Control Configuration


• where s is the Laplace variable and we used relation (1.1.2).
• Note that
• 1. the input u(t) to the plant is determined by the error e(t) and the compensator, and
• 2. not all the variables are readily available for feedback; in most cases only one output variable is available for feedback.
• The modern control theory, concerned with multiple-input, multiple-output (MIMO) systems, is based on state variable representation in terms of a set of first-order differential (or difference) equations. Here, the system (plant) is characterized by state variables, say, in linear, time-invariant form as

ẋ(t) = Ax(t) + Bu(t),
y(t) = Cx(t) + Du(t),

• where the dot denotes differentiation with respect to (w.r.t.) t; x(t), u(t), and y(t) are the n-, r-, and m-dimensional state, control, and output vectors respectively; and A is the n×n state matrix, B the n×r input matrix, C the m×n output matrix, and D the m×r transfer matrix. Similarly, a nonlinear system is characterized by

ẋ(t) = f(x(t), u(t), t),
y(t) = g(x(t), u(t), t).

• A minimal simulation sketch of the linear model is given below.
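• As an illustrative sketch (not from the original slides), the linear state-space model can be stepped forward in time numerically; the matrices, step size, and initial state below are made-up placeholder values.

```python
# Forward-Euler simulation of x_dot = A x + B u, y = C x + D u.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # n x n state matrix (placeholder values)
B = np.array([[0.0],
              [1.0]])          # n x r input matrix
C = np.array([[1.0, 0.0]])     # m x n output matrix
D = np.array([[0.0]])          # m x r transfer matrix

dt, T = 0.01, 5.0
x = np.array([1.0, 0.0])       # initial state x(t0)
u = np.array([0.0])            # zero input; substitute any control law
for _ in range(int(T / dt)):
    x = x + dt * (A @ x + B @ u)   # Euler step on x_dot = A x + B u
y = C @ x + D @ u                  # output equation y = C x + D u
print("state:", x, "output:", y)
```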
• The modern theory dictates that all the state variables should be fed back after suitable weighting.
• We see from Figure 1.2 that in the modern control configuration,
• 1. the input u(t) is determined by the controller (consisting of error detector and compensator) driven by the system states x(t) and the reference signal r(t),
• 2. all or most of the state variables are available for control, and
• 3. it depends on well-established matrix theory, which is amenable to large-scale computer simulation.
Figure 1.2: Modern Control Configuration (components of a modern control system)
Optimization
• Optimization is a very desirable feature in day-to-day life.
• We like to work and use our time in an optimum manner, use resources
optimally and so on.
• The subject of optimization is quite general in the sense that it can be viewed
in different ways depending on the approach (algebraic or geometric), the
interest (single or multiple), the nature of the signals (deterministic or
stochastic), and the stage (single or multiple) used in optimization.
• Note that the calculus of variations is one small area of the big picture of the optimization field, and it forms the basis for our study of optimal control systems.
• Further, optimization can be classified as static optimization and dynamic optimization.
1. Static optimization is concerned with controlling a plant under steady-state conditions, i.e., the system variables are not changing with respect to time.
• The plant is then described by algebraic equations. Techniques used are ordinary calculus, Lagrange multipliers, and linear and nonlinear programming (see the sketch after this list).
2. Dynamic optimization is concerned with the optimal control of plants under dynamic conditions, i.e., the system variables are changing with respect to time, and thus time is involved in the system description.
• The plant is then described by differential (or difference) equations. Techniques used are search techniques, dynamic programming, variational calculus (or the calculus of variations), and the Pontryagin principle.
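• As a small static-optimization illustration (an assumed example, not from the text): minimize f(x1, x2) = x1² + x2² subject to x1 + x2 = 1. The Lagrange-multiplier solution is x1 = x2 = 0.5, and the same algebraic problem can be handed to a nonlinear-programming solver:

```python
# Static (algebraic) optimization: no time dependence is involved.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2                           # objective
con = {"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0}  # x1 + x2 = 1
result = minimize(f, x0=np.zeros(2), method="SLSQP", constraints=[con])
print(result.x)   # ~[0.5, 0.5], matching the Lagrange-multiplier answer
```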
Optimal Control
• The main objective of optimal control is to determine control signals that will cause a process (plant) to satisfy some physical constraints and at the same time extremize (maximize or minimize) a chosen performance criterion (performance index or cost function).
• Referring to Figure 1.2, we are interested in finding the optimal control u*(t) (* indicates the optimal condition) that will drive the plant P from an initial state to a final state with some constraints on controls and states, and at the same time extremize the given performance index J.
The formulation of an optimal control problem requires:

1. a mathematical description (or model) of the process to be controlled (generally in state variable form),
2. a specification of the performance index, and
3. a statement of the boundary conditions and the physical constraints on the states and/or controls.

One way to collect these ingredients in code is sketched below.
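• A hypothetical way to gather the three ingredients in code (the names below are illustrative assumptions, not notation from the text):

```python
# Container for an optimal control problem: plant model, quadratic
# performance-index weights, and boundary conditions.
from dataclasses import dataclass
import numpy as np

@dataclass
class OptimalControlProblem:
    A: np.ndarray       # plant model: x_dot = A x + B u
    B: np.ndarray
    Q: np.ndarray       # state weighting in the performance index
    R: np.ndarray       # control weighting in the performance index
    F: np.ndarray       # terminal-cost weighting
    x0: np.ndarray      # boundary condition: initial state x(t0)
    t0: float = 0.0     # initial time
    tf: float = 10.0    # final time
```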
1 Plant
• For the purpose of optimization, we describe a physical plant by a set of linear or nonlinear differential or difference equations. For example, a linear time-invariant system is described by the state and output relations given above, and a nonlinear system by ẋ(t) = f(x(t), u(t), t).
2 Performance Index
• Classical control design techniques have been successfully applied to linear, time-invariant, single-input, single-output (SISO) systems.
• Typical performance criteria are the system time response to a step or ramp input, characterized by rise time, settling time, peak overshoot, and steady-state accuracy; and the frequency response of the system, characterized by gain and phase margins and bandwidth.
• In modern control theory, the optimal control problem is to find a control which causes the dynamical system to reach a target or follow a state variable trajectory and at the same time extremize a performance index, which may take one of several forms, as described below.

1. Performance Index for Time-Optimal Control System:

• We try to transfer a system from an arbitrary initial state x(t0) to a specified final state x(tf) in minimum time. The corresponding performance index (PI) is

J = ∫_{t0}^{tf} dt = tf − t0.
2. Performance Index for Fuel-Optimal Control System:

• Consider a spacecraft problem. Let u(t) be the thrust of a rocket engine, and assume that the magnitude |u(t)| of the thrust is proportional to the rate of fuel consumption.
• In order to minimize the total expenditure of fuel, we may formulate the performance index as

J = ∫_{t0}^{tf} |u(t)| dt.

• For several controls, we may write it as

J = ∫_{t0}^{tf} Σ_i R_i |u_i(t)| dt,

• where R_i is a weighting factor. (A numerical-evaluation sketch follows.)
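• As a hedged numerical sketch (the control history and weights are made up), the fuel cost can be evaluated on sampled data with the trapezoidal rule:

```python
# Evaluate J = integral over [t0, tf] of sum_i R_i |u_i(t)| dt.
import numpy as np

t = np.linspace(0.0, 10.0, 1001)              # time grid on [t0, tf]
u = np.vstack([np.sin(t), 0.5 * np.cos(t)])   # two sampled control channels
R = np.array([1.0, 2.0])                      # per-channel weighting factors

integrand = R @ np.abs(u)                     # sum_i R_i |u_i(t)| at each t
J = np.trapezoid(integrand, t)                # use np.trapz on NumPy < 2.0
print("fuel cost J =", J)
```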
3. Performance Index for Minimum-Energy Control System:

• Consider u_i(t) as the current in the ith loop of an electric network.
• Then Σ_i u_i²(t) r_i (where r_i is the resistance of the ith loop) is the total power or the total rate of energy expenditure of the network.
• Then, for minimization of the total expended energy, we have the performance criterion

J = ∫_{t0}^{tf} Σ_i u_i²(t) r_i dt,

• or, in general,

J = ∫_{t0}^{tf} u'(t) R u(t) dt,

• where R is a positive definite matrix and the prime (') denotes the transpose here and throughout this book (see Appendix A for more details on definite matrices).
• Similarly, we can think of minimizing the integral of the squared error of a tracking system. We then have

J = ∫_{t0}^{tf} x'(t) Q x(t) dt,

• where x_d(t) is the desired value, x_a(t) is the actual value, and x(t) = x_a(t) − x_d(t) is the error.
• Here, Q is a weighting matrix, which can be positive semidefinite. (Both quadratic indices are evaluated numerically below.)
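• The quadratic indices can be evaluated the same way as the fuel cost; the trajectories and weights below are illustrative assumptions:

```python
# Evaluate J = integral of u'(t) R u(t) dt and J = integral of x'(t) Q x(t) dt.
import numpy as np

t = np.linspace(0.0, 5.0, 501)
u = np.vstack([np.exp(-t), -0.5 * np.exp(-t)])   # sampled control history
x = np.vstack([np.cos(t) - 1.0, np.sin(t)])      # tracking error x = xa - xd
R = np.diag([1.0, 2.0])                          # positive definite
Q = np.diag([1.0, 0.5])                          # positive semidefinite

energy = np.einsum("it,ij,jt->t", u, R, u)       # u'(t) R u(t) at each sample
error = np.einsum("it,ij,jt->t", x, Q, x)        # x'(t) Q x(t) at each sample
print("energy cost:", np.trapezoid(energy, t))   # np.trapz on NumPy < 2.0
print("tracking cost:", np.trapezoid(error, t))
```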
4. Performance Index for Terminal Control System:

• In a terminal target problem, we are interested in minimizing the error between the desired target position x_d(tf) and the actual target position x_a(tf) at the end of the maneuver, i.e., at the final time tf.
• The terminal (final) error is x(tf) = x_a(tf) − x_d(tf). Taking care of positive and negative values of the error and of weighting factors, we structure the cost function as

J = x'(tf) F x(tf),

• which is also called the terminal cost function. Here, F is a positive semidefinite matrix.
5. Performance Index for General Optimal Control System:

• Combining the above formulations, we have a performance index in general form:

J = x'(tf) F x(tf) + ∫_{t0}^{tf} [x'(t) Q x(t) + u'(t) R u(t)] dt,

• where R is a positive definite matrix, and Q and F are positive semidefinite matrices.
• Note that the matrices Q and R may be time varying.
• This particular form of the performance index is called the quadratic form (in terms of the states and controls); an LQR sketch follows below.
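• For a linear time-invariant plant with tf → ∞, minimizing this quadratic index is the classical LQR problem; a minimal sketch (with placeholder matrices, and SciPy's Riccati solver standing in for the theory developed later) is:

```python
# Steady-state LQR: solve the algebraic Riccati equation
# A'P + PA - P B R^{-1} B' P + Q = 0, then K = R^{-1} B' P gives u*(t) = -K x(t).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)             # positive semidefinite state weighting
R = np.array([[1.0]])     # positive definite control weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # optimal gain K = R^{-1} B' P
print("optimal feedback gain K =", K)
```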
The problems arising in optimal control are classified based on the structure
of the performance index J.
3 Constraints
• The control u(t) and state x(t) vectors are either unconstrained or constrained, depending upon the physical situation.
• The unconstrained problem is less involved and gives rise to some elegant results.
• From physical considerations, we often have the controls and states, such as currents and voltages in an electrical circuit, the speed of a motor, or the thrust of a rocket, constrained as

U− ≤ u(t) ≤ U+,    X− ≤ x(t) ≤ X+,

• where + and − indicate the maximum and minimum values the variables can attain.
• We are interested in minimizing the error of the system; therefore, when the desired state vector is represented as x_d = 0, we are able to consider the error as identically equal to the value of the state vector.
• That is, we intend the system to be at equilibrium, x = x_d = 0, and any deviation from equilibrium is considered an error.
• Therefore, in this section, we will consider the design of optimal control systems using state variable feedback and error-squared performance indices.
• The control system we will consider is shown in Figure 11.14 and can be represented by the vector differential equation

ẋ = Ax + Bu.   (11.27)
• We will select a feedback controller so that u is some function of the measured state variables x; therefore u = −k(x).
• The choice of the control signals is somewhat arbitrary and depends partially on the actual desired performance and on the complexity of the feedback structure allowable.
• Often, we are limited in the number of state variables available for feedback, since we are only able to use measurable state variables.
• In our case, we limit the feedback function to a linear function, so that u = −Kx, where K is an m × n matrix.
• Therefore, in expanded form, we have

ẋ = Ax + Bu = Ax − BKx = Hx,

where H is the n × n matrix resulting from the addition of the elements of A and −BK (a stability check is sketched below).
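• A small sketch (with made-up numbers) of forming H = A − BK and checking closed-loop stability through its eigenvalues:

```python
# Closed-loop matrix H = A - BK for u = -Kx; the system x_dot = Hx is
# stable when every eigenvalue of H has a negative real part.
import numpy as np

A = np.array([[0.0, 1.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[2.0, 1.0]])            # a candidate linear feedback gain

H = A - B @ K
eigvals = np.linalg.eigvals(H)
print("closed-loop eigenvalues:", eigvals)
print("stable:", bool(np.all(eigvals.real < 0)))
```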
Now, returning to the error-squared performance index, we recall from Section 5.7 that the index for a single state variable, x_1, is written as

J = ∫_0^{tf} x_1²(t) dt.

A performance index written in terms of two state variables would then be

J = ∫_0^{tf} (x_1² + x_2²) dt.

• Since we wish to define the performance index in terms of an integral of the sum of the state variables squared, we will use the matrix operation x'x = x_1² + x_2² + ... + x_n².
• Then the specific form of the performance index, in terms of the state vector, is

J = ∫_0^{tf} x'(t) x(t) dt.
• The general form of the performance index (Equation 11.26) incorporates a term with u that we have not included at this point, but we will do so later in this section.
• Again considering the above equation, we will let the final time of interest be tf = ∞.
• To obtain the minimum value of J, we postulate the existence of an exact differential such that

d/dt (x' P x) = −x' x,

• where P is to be determined. A symmetric P matrix will be used to simplify the algebra without any loss of generality.
• Then, for a symmetric P matrix, p_ij = p_ji.
• Completing the differentiation indicated on the left-hand side of the above equation, we have

ẋ' P x + x' P ẋ = x'(H'P + PH) x = −x' x.

• In the evaluation of the limit as t → ∞, we have assumed that the system is stable, and hence x(∞) = 0, as desired.
• Therefore, to minimize the performance index J, we consider the two equations

J = ∫_0^{tf} x' x dt = x'(0) P x(0),   (11.40)

H'P + PH = −I.   (11.41)
• The design steps are then as follows (a sketch of both steps is given below):

1. Determine the matrix P that satisfies Equation (11.41), where H is known.
2. Minimize J by determining the minimum of Equation (11.40) by adjusting one or more unspecified system parameters.
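• A sketch of both design steps (the plant, gain structure, and sweep range are illustrative assumptions): step 1 solves the Lyapunov equation H'P + PH = −I for P, and step 2 evaluates J = x'(0)Px(0) over a free parameter and keeps the minimizer.

```python
# Design steps 1 and 2 for the error-squared performance index.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double-integrator plant (assumed)
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])                # initial state

best = None
for k in np.linspace(0.5, 10.0, 20):     # adjustable feedback parameter
    K = np.array([[1.0, k]])             # u = -Kx, so H = A - BK
    H = A - B @ K
    # solve_continuous_lyapunov(a, q) solves a X + X a' = q; with a = H'
    # and q = -I this is exactly H'P + PH = -I (Equation 11.41).
    P = solve_continuous_lyapunov(H.T, -np.eye(2))
    J = x0 @ P @ x0                      # Equation (11.40): J = x'(0) P x(0)
    if best is None or J < best[0]:
        best = (J, k)
print("minimum J = %.3f at k = %.2f" % best)
```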
