Linear Control

This document provides an overview of linear control theory and its application to modeling dynamical systems. It discusses control theory concepts like open-loop and closed-loop feedback control. It also introduces modeling dynamical systems using linear ordinary differential equations and state-space models. As an example, it models a simple car system and expresses it as a linear system to demonstrate how linear algebra can be used to analyze linear dynamical systems.


IITK CS659: Autonomous Cyber-Physical Systems:

Basics of Linear Control

Winter 2021
Instructors: Indranil Saha & Jyo Deshmukh

USC Viterbi
School of Engineering
Department of Computer Science
What is control theory?

 Engineering techniques and theory for control of dynamical systems


 Objective:
 Regulate system so that it behaves in desired fashion
 Ensure timely reaction to changes in system behavior
 Make sure control action does not cause system to behave erratically

Model-based Design
 Mathematical and visual method commonly used for designing controllers
 Development is manifested in four steps:
 Create a dynamical system model of the plant
 Design/synthesize a controller (model)
 Simulate/test the closed-loop system for different environment
(exogenous) inputs
 Generate code from the controller model and deploy the controller

“Plant”
 Terminology inherited from the (chemical) process industry
 Can represent any physical component to be controlled, e.g. electrical,
electronic, mechanical, etc.
 Convenient to model such systems as ordinary differential equations (ODEs)
 State variables of ODEs represent physical quantities, e.g. pressure,
temperature, velocity, acceleration, current, voltage, etc.

Open-loop vs. Closed-loop control
 Open-loop or feed-forward control
 Control action does not depend on plant output
 Most common form of control in many CPS applications!

[Diagram: 𝐰(𝑡) → Controller → 𝐮(𝑡) → Plant → 𝐲(𝑡)]
𝐰: Input variable (from the environment)
𝐮: Control inputs to the plant
𝐲: Plant outputs

 Pros:
 Cheaper, few sensors required, logic pretty straightforward
 Cons:
 Quality of control poor without human intervention
 Not adaptive!

Closed-loop or Feedback Control

 Controller adjusts controllable inputs in response to observed outputs
 Can respond better to variations in disturbances
 Performance depends on how well outputs can be sensed, and how quickly the controller can track changes in the output
 Many different flavors of feedback control!

[Diagram: 𝐰(𝑡) and the fed-back 𝐲(𝑡) enter ∑; Controller → 𝐮(𝑡) → Plant → 𝐲(𝑡)]
𝐰: Input variable (from the environment)
𝐮: Control input to the plant
𝐲: Plant output

Feedback Control

 General mathematical model for feedback control

ODE representing time evolution of plant dynamics:  d𝐱/dt = f(𝐱, 𝐮, 𝐰)
Equation mapping plant states to observed plant outputs:  𝐲 = g(𝐱, 𝐮)
(Stateless) feedback control:  𝐮 = h(𝐲)

𝐰: Inputs from the environment
𝐱: Plant state variables
𝐮: Control inputs to the plant
𝐲: Observations of the plant
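These three maps compose into a simulation loop. A minimal forward-Euler sketch, where the particular plant f, observation g, and feedback law h are illustrative placeholders, not from the slides:

```python
# Forward-Euler simulation of the closed loop on this slide:
#   dx/dt = f(x, u, w),  y = g(x),  u = h(y)
# (g is simplified to depend on x only.)

def simulate(f, g, h, x0, w, t_end, dt=1e-3):
    x, t = x0, 0.0
    while t < t_end:
        y = g(x)                  # observe the plant
        u = h(y)                  # stateless feedback law
        x = x + dt * f(x, u, w)   # one Euler step of the plant ODE
        t += dt
    return x

# Illustrative choice: plant dx/dt = -x + u + w, full observation y = x,
# proportional feedback u = -2y.  The closed loop is dx/dt = -3x.
x_final = simulate(lambda x, u, w: -x + u + w,
                   lambda x: x,
                   lambda y: -2.0 * y,
                   x0=1.0, w=0.0, t_end=5.0)
```

The closed-loop state decays like e^(−3t), so after 5 seconds it is essentially zero.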

Linear Control
 Linear systems are easier to analyze, understand, theorize

d𝐱/dt = 𝐀𝐱 + 𝐁𝐮 + 𝐰
𝐲 = 𝐂𝐱 + 𝐃𝐮        (𝐃: often assumed to be zero)
𝐮 = −𝐾𝐲            (for simplicity, let 𝐂 = 𝐈, the identity matrix, i.e., the state is fully observable)

 The above equations reduce to: 𝐱̇ = 𝐀𝐱 − 𝐁𝐊𝐱 + 𝐰
 Consider some constant input 𝐰, which can be assumed w.l.o.g. to be zero; then 𝐱̇ = (𝐀 − 𝐁𝐊)𝐱

Linear ODEs through a simple car model

Force 𝐹 is applied to the car at position 𝑝, moving with velocity 𝑣, against a friction force 𝑘𝑣.

Newton's law of motion: 𝐹 − 𝑘𝑣 = 𝑚 d²𝑝/d𝑡²;  𝑣 = d𝑝/d𝑡

Executions of Car
 Given an initial state (𝑝0 , 𝑣0 ) and an input signal 𝐹(𝑡), the execution of the
system is defined by state-trajectories 𝑝 𝑡 and 𝑣 𝑡 (from 𝕋 to ℝ) that
satisfy the initial-value problem:
𝑝(0) = 𝑝₀;  𝑣(0) = 𝑣₀
 𝑝̇ = 𝑣(𝑡);  𝑣̇ = (𝐹(𝑡) − 𝑘𝑣(𝑡))/𝑚

Sample Execution of Car
 Suppose ∀𝑡: 𝐹(𝑡) = 0, 𝑝₀ = 5 m, 𝑣₀ = 20 m/s. Then, we need to solve:
 𝑝(0) = 5; 𝑣(0) = 20
 𝑝̇ = 𝑣; 𝑣̇ = −𝑘𝑣/𝑚
 Solution to the above differential equation (solve for 𝑣 first, then 𝑝):
 𝑣(𝑡) = 𝑣₀ e^(−𝑘𝑡/𝑚);  𝑝(𝑡) = 𝑝₀ + (𝑚𝑣₀/𝑘)(1 − e^(−𝑘𝑡/𝑚))
 Note, as 𝑡 → ∞, 𝑣(𝑡) → 0 and 𝑝(𝑡) → 𝑝₀ + 𝑚𝑣₀/𝑘
(Parameters: 𝑚 = 1000 kg, 𝑘 = 50 N·s/m)

Sample Execution of Car with constant force
 Suppose ∀𝑡: 𝐹(𝑡) = 500 N, 𝑝₀ = 5 m, 𝑣₀ = 20 m/s. Then, we need to solve:
 𝑝(0) = 5; 𝑣(0) = 20
 𝑝̇ = 𝑣; 𝑣̇ = (500 − 𝑘𝑣)/𝑚
 Solution to the above differential equation (solve for 𝑣 first, then 𝑝):
 𝑣(𝑡) = 𝐹/𝑘 + (𝑣₀ − 𝐹/𝑘) e^(−𝑘𝑡/𝑚);  𝑝(𝑡) = 𝑝₀ + (𝐹/𝑘)𝑡 + (𝑚/𝑘)(𝑣₀ − 𝐹/𝑘)(1 − e^(−𝑘𝑡/𝑚))
 Note, as 𝑡 → ∞, 𝑣(𝑡) → 𝐹/𝑘 = 10 m/s (the terminal velocity)
(Parameters: 𝑚 = 1000 kg, 𝑘 = 50 N·s/m)
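The same numerical check works for the constant-force case; the closed-form expressions follow from solving the linear ODE for 𝑣 first (step size and horizon again arbitrary):

```python
import math

m, k, F = 1000.0, 50.0, 500.0   # mass, friction, constant applied force
p, v = 5.0, 20.0
p0, v0 = p, v

dt, T = 1e-3, 120.0
for _ in range(int(T / dt)):
    p += dt * v
    v += dt * (F - k * v) / m

# Closed-form solution: v decays toward the terminal velocity F/k = 10 m/s.
v_exact = F / k + (v0 - F / k) * math.exp(-k * T / m)
p_exact = p0 + (F / k) * T + (m / k) * (v0 - F / k) * (1.0 - math.exp(-k * T / m))
```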

Plots

[Figure: phase plot and input/output signals for the car executions]
Linear ODE through a simple car model
𝑚 d²𝑝/d𝑡² = 𝐹 − 𝑘𝑣,  𝑣 = d𝑝/d𝑡      Rewrite as:  𝑝̇ = 𝑣,  𝑣̇ = (𝐹 − 𝑘𝑣)/𝑚

𝐱 = (𝑝, 𝑣),  𝑥₁ = 𝑝,  𝑥₂ = 𝑣

[𝑥̇₁; 𝑥̇₂] = [[0, 1], [0, −𝑘/𝑚]] 𝐱 + [0; 1/𝑚] 𝐹

Linear Systems
 Equation of simple car dynamics can be written compactly as:
   [𝑥̇; 𝑣̇] = [[0, 1], [0, −𝑘/𝑚]] [𝑥; 𝑣] + [0; 1/𝑚] [𝐹]
 Letting 𝐴 = [[0, 1], [0, −𝑘/𝑚]] and 𝐵 = [0; 1/𝑚], we can re-write the above equation in the form:
 𝐱̇ = 𝐴𝐱 + 𝐵𝐮, where 𝐱 = [𝑥 𝑣]ᵀ and 𝐮 = [𝐹]

Linear Systems
 Functions 𝑓, 𝑔, ℎ in the closed-loop control formulation are all linear in their
arguments
 Linear algebra was invented to reason about linear systems!
 Linear systems have many nice properties:
 Many analysis methods in the frequency domain (using Fourier/Laplace
transform methods)
 Superposition principle (net response to two or more stimuli is the sum of
responses to each stimulus)

Solutions to Linear Systems
 Autonomous linear system has no inputs: 𝐱ሶ = 𝐴𝐱
 Solution of autonomous linear system can be fully characterized:
 𝐱(𝑡) = e^(𝐴𝑡) 𝐱(0)
 e^𝐴 is the matrix exponential: e^𝐴 = 𝐼 + 𝐴 + 𝐴²/2! + 𝐴³/3! + 𝐴⁴/4! + …
 e^𝐴 is usually approximated by its first 𝑘 terms
 Computing e^𝐴 is easy if 𝐴 is a diagonal matrix (non-zero elements only on the diagonal)
 In practice, numerical integration methods outperform the matrix exponential
 For a linear system with exogenous inputs:
 𝐱(𝑡) = e^(𝐴𝑡) 𝐱₀ + ∫₀ᵗ e^(𝐴(𝑡−𝜏)) 𝐵 𝐮(𝜏) d𝜏
 Can find analytic solutions for simpler classes of 𝐮(𝑡); in general, use numerical integration techniques
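The truncated-series definition above can be exercised directly on the autonomous car model (𝐹 = 0). A pure-Python 2×2 sketch; the number of series terms is an arbitrary choice:

```python
import math

# Truncated-series matrix exponential, as on the slide:
#   e^A ~= I + A + A^2/2! + ... + A^k/k!   (2x2 matrices only, for brevity)

def mat_mul(X, Y):
    return [[sum(X[i][r] * Y[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=30):
    E = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at I
    P = [[1.0, 0.0], [0.0, 1.0]]   # holds A^k / k!
    for k in range(1, terms):
        P = [[x / k for x in row] for row in mat_mul(P, A)]
        E = [[E[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return E

# Autonomous car model (F = 0): A = [[0, 1], [0, -k/m]], x = (p, v).
m_, k_, t = 1000.0, 50.0, 10.0
E = mat_exp([[0.0, t], [0.0, -k_ / m_ * t]])   # e^(At)
p0, v0 = 5.0, 20.0
p_t = E[0][0] * p0 + E[0][1] * v0
v_t = E[1][0] * p0 + E[1][1] * v0

# Closed-form values from the earlier slide, for comparison:
v_expected = v0 * math.exp(-0.5)
p_expected = p0 + (m_ * v0 / k_) * (1.0 - math.exp(-0.5))
```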

Stability of Systems
 Property capturing the ability of a system to return to a quiescent state after
perturbation
 Stable systems recover after disturbances, unstable systems may not
 Almost always a desirable property for a system design
 Fundamental problem in control: design controllers to stabilize a system
 Problem: an inverted pendulum on a moving cart is inherently unstable; aim: keep it upright
 Solution strategy: accelerate the cart in the same direction as the pendulum is falling
 Design problem: design a controller to stabilize the system by computing the velocity and direction of cart travel
[Figure: inverted pendulum on a cart; the pole falls in one direction and the cart accelerates in that same direction]

Lyapunov stability
 System 𝐱̇ = 𝑓(𝐱)
 Equilibrium point 𝐱*: a point where 𝑓(𝐱) is zero
 Equilibrium point 𝐱* is Lyapunov-stable if:
 For every 𝜖 > 0, there exists a 𝛿 > 0 such that
 if ‖𝐱(0) − 𝐱*‖ < 𝛿, then for every 𝑡 ≥ 0 we have ‖𝐱(𝑡) − 𝐱*‖ < 𝜖
[Figure: a trajectory starting in the 𝛿-ball around 𝐱* stays within the 𝜖-ball]
Asymptotic + Exponential Stability
 System 𝐱ሶ = 𝑓 𝐱
 Equilibrium point 𝐱* is asymptotically-stable if:
 𝐱* is Lyapunov-stable +
 There exists 𝛿 > 0 s.t. if ‖𝐱(0) − 𝐱*‖ < 𝛿, then lim_{𝑡→∞} ‖𝐱(𝑡) − 𝐱*‖ = 0
 Equilibrium point 𝐱* is exponentially-stable if:
 𝐱* is asymptotically stable +
 There exist 𝛼 > 0, 𝛽 > 0, 𝛿 > 0 s.t. if ‖𝐱(0) − 𝐱*‖ < 𝛿, then for all 𝑡 ≥ 0:
   ‖𝐱(𝑡) − 𝐱*‖ ≤ 𝛼 ‖𝐱(0) − 𝐱*‖ e^(−𝛽𝑡)

What do these definitions all mean?
 Lyapunov stable: solutions starting 𝛿-close to the equilibrium point must
remain close (within 𝜖) forever
 Asymptotically stable: solutions not only remain close, but also converge to
the equilibrium
 Exponentially stable: solutions not only converge to the equilibrium, but in
fact converge at least as fast as a known exponential rate
 All stable linear systems are exponentially stable
 This need not be true for nonlinear systems!

Analyzing stability for linear systems

 Eigenvalues of a matrix 𝐴:
 Values 𝜆 satisfying the equation 𝐴𝐯 = 𝜆𝐯; 𝐯 is called an eigenvector
 Equivalently: values satisfying det(𝐴 − 𝜆𝐼) = 0, where 𝐼 is the identity matrix
 Interesting result for linear systems: the system 𝐱̇ = 𝐴𝐱 is asymptotically stable if and only if every eigenvalue of 𝐴 has a negative real part
 Lyapunov stable only if every eigenvalue has a non-positive real part (eigenvalues on the imaginary axis must additionally not be repeated)
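For 2×2 systems the eigenvalue test reduces to the quadratic formula on 𝜆² − tr(𝐴)𝜆 + det(𝐴) = 0. A small sketch; the example matrices are taken from this deck's car and pole-placement slides:

```python
import cmath

# Asymptotic stability test for x' = Ax (2x2 case): every eigenvalue of A
# must have negative real part.

def eig2(A):
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    d = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + d) / 2.0, (tr - d) / 2.0

def asymptotically_stable(A):
    return all(l.real < 0 for l in eig2(A))

# Car model A = [[0, 1], [0, -k/m]]: eigenvalues 0 and -k/m, so it is
# Lyapunov stable but NOT asymptotically stable (the position drifts).
A_car = [[0.0, 1.0], [0.0, -0.05]]
# The pole-placement plant A = [[1, 1], [1, 2]] in this deck is unstable.
A_unstable = [[1.0, 1.0], [1.0, 2.0]]
```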

Simple Linear Feedback Control: Reference Tracking
[Diagram: 𝐫(𝑡) enters a summing junction (+, with −𝐲(𝑡) fed back); Controller computes 𝐮 = 𝐾(𝐫 − 𝐲); Plant: 𝐱̇ = 𝐴𝐱 + 𝐵𝐮, 𝐲 = 𝐶𝐱 + 𝐷𝐮]

 Common objective: make the plant state track the reference signal 𝐫(𝑡)
 For convenience, let 𝐶 = 𝐼 (identity) and 𝐷 = 𝟎 (zero matrix), i.e., the full state is observable through sensors, and the input has no immediate effect on the output

Simple Linear Feedback Control: Reference Tracking
[Diagram: 𝐫(𝑡) enters the summing junction (with −𝐱(𝑡) fed back); Controller computes 𝐮 = 𝐾(𝐫 − 𝐱); Plant: 𝐱̇ = 𝐴𝐱 + 𝐵𝐮]
For simplicity, assume we are tracking the zero signal, i.e., 𝐫(𝑡) = 0

 Closed-loop dynamics: 𝐱̇ = 𝐴𝐱 + 𝐵𝐾(𝐫 − 𝐱) = (𝐴 − 𝐵𝐾)𝐱 + 𝐵𝐾𝐫
 Pick 𝐾 such that the closed-loop system has desirable behavior
 To make the closed-loop system stable, pick 𝐾 such that the eigenvalues of 𝐴 − 𝐵𝐾 have negative real parts
 A controller designed this way is also called a pole placement controller

Designing a pole placement controller

Plant: 𝐱̇ = [[1, 1], [1, 2]] 𝐱 + [1; 0] 𝐮;  Controller: 𝐮 = 𝐾(𝐫 − 𝐱)

 Note eigs(𝐴) = {0.382, 2.618} ⇒ unstable plant!
 Let 𝐾 = [𝑘₁ 𝑘₂]. Then 𝐴 − 𝐵𝐾 = [[1 − 𝑘₁, 1 − 𝑘₂], [1, 2]]
 eigs(𝐴 − 𝐵𝐾) satisfy the equation 𝜆² + (𝑘₁ − 3)𝜆 + (1 − 2𝑘₁ + 𝑘₂) = 0
 Suppose we want eigenvalues at −5, −6; the equation would then be 𝜆² + 11𝜆 + 30 = 0
 Comparing the two equations: 𝑘₁ − 3 = 11 and 1 − 2𝑘₁ + 𝑘₂ = 30
 This gives 𝑘₁ = 14, 𝑘₂ = 57. Thus the controller with 𝐾 = [14 57] stabilizes the plant!
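The slide's arithmetic can be replayed: with K = [14, 57], the closed-loop matrix should have eigenvalues exactly at −5 and −6.

```python
import math

# Closed-loop matrix A - BK for A = [[1,1],[1,2]], B = [1; 0], K = [14, 57].
A = [[1.0, 1.0], [1.0, 2.0]]
k1, k2 = 14.0, 57.0
Acl = [[A[0][0] - k1, A[0][1] - k2],   # B = [1, 0]^T only affects row 1
       [A[1][0],      A[1][1]]]

# Eigenvalues via the quadratic formula on the 2x2 characteristic polynomial.
tr = Acl[0][0] + Acl[1][1]
det = Acl[0][0] * Acl[1][1] - Acl[0][1] * Acl[1][0]
d = math.sqrt(tr * tr - 4.0 * det)
eigs = sorted([(tr + d) / 2.0, (tr - d) / 2.0])
# eigs == [-6.0, -5.0]: the placed poles.
```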

Linear Quadratic Regulator
 Pole placement involves heuristics (we arbitrarily decided where to put the
eigenvalues)
 Principled approach is to put the poles such that the closed-loop system
optimizes the cost function:

𝐽_LQR = ∫₀^∞ ( 𝐱(𝑡)ᵀ 𝑄 𝐱(𝑡) + 𝐮(𝑡)ᵀ 𝑅 𝐮(𝑡) ) d𝑡

 𝐱(𝑡)ᵀ𝑄𝐱(𝑡) is called the state cost; 𝐮(𝑡)ᵀ𝑅𝐮(𝑡) is called the control cost
 Given a feedback law: 𝐮 𝑡 = −𝐾lqr 𝐱(𝑡), 𝐾lqr can be found precisely
 In Matlab, there is a simple one-line function lqr to do this!
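For a scalar plant the LQR gain can be worked out by hand: the algebraic Riccati equation reduces to 2aP − (b²/r)P² + q = 0 with K = bP/r. The sketch below does this for an illustrative unstable plant (a = b = q = r = 1 are arbitrary choices; Matlab's lqr solves the general matrix version):

```python
import math

# Scalar LQR: plant x' = a x + b u, cost J = integral of (q x^2 + r u^2).
# Positive root of the scalar algebraic Riccati equation, then K = b P / r.

def lqr_scalar(a, b, q, r):
    P = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    return b * P / r

a, b, q, r = 1.0, 1.0, 1.0, 1.0   # an unstable plant, equal weights
K = lqr_scalar(a, b, q, r)        # works out to 1 + sqrt(2)
closed_loop = a - b * K           # = -sqrt(2): stable, as LQR guarantees
```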

Linear Control Systems: Observability/Controllability
 Linear control systems have the following form:
  𝐱̇ = 𝐴𝐱 + 𝐵𝐮
  𝐲 = 𝐶𝐱 + 𝐷𝐮

 𝐱̇ = 𝐴𝐱 + 𝐵𝐮 describes the state evolution
 𝐲 = 𝐶𝐱 + 𝐷𝐮 describes how states are observed; 𝐷 is often 𝟎

𝐱: state (internal to the process being controlled)
𝐮: control input (actuator command)
𝐲: output (what the sensor reads)
 Important ideas: Controllability and Observability

Controllability
 Can we always choose eigenvalues to find a stabilizing controller? NO!
 For 𝐱̇ = 𝐴𝐱 + 𝐵𝐮, what if 𝐴 is unstable and 𝐵 = [0 … 0]ᵀ?
 No controller can ever stabilize the system
 How do we determine for a given 𝐴, 𝐵 whether there is a controller?
 Controllability:
 Can we find the condition on the system design that ensures that we can
always move the system to whichever state/output we want?
 Important question that affects which actuators we pick for the system

Checking Controllability
 Find the controllability matrix 𝑅:
 𝑅 = [𝐵  𝐴𝐵  𝐴²𝐵  …  𝐴ⁿ⁻¹𝐵]   [𝑛 is the state dimension]
 The system is controllable if 𝑅 has full row rank (i.e., its rows are linearly independent)
 Example:
   [𝑥̇₁; 𝑥̇₂; 𝑥̇₃] = [[−1, 0, 2], [2, 1, 0], [1, 1, 3]] [𝑥₁; 𝑥₂; 𝑥₃] + [[1, 1], [0, 1], [1, 0]] [𝑢₁; 𝑢₂]

Checking Controllability
 𝐴 = [[−1, 0, 2], [2, 1, 0], [1, 1, 3]],  𝐵 = [[1, 1], [0, 1], [1, 0]]

 𝑅 = [𝐵  𝐴𝐵  𝐴²𝐵] = [[1, 1, 1, −1, 7, 5], [0, 1, 2, 3, 4, 1], [1, 0, 4, 2, 15, 8]]

 rank(𝑅) = 3 (i.e., full rank)
 So the system is controllable: it uses 2 actuators (𝑢₁, 𝑢₂)!
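The slide's 𝑅 and its rank can be reproduced without Matlab; the Gaussian-elimination rank routine below is a minimal stand-in for rank():

```python
# Build R = [B  AB  A^2 B] and check its rank by row reduction.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def rank(M, tol=1e-9):
    M = [row[:] for row in M]          # work on a copy
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > tol), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > tol:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

A = [[-1, 0, 2], [2, 1, 0], [1, 1, 3]]
B = [[1, 1], [0, 1], [1, 0]]
AB = mat_mul(A, B)
A2B = mat_mul(A, AB)
R = [B[i] + AB[i] + A2B[i] for i in range(3)]   # horizontal concatenation
```

This reproduces R = [[1,1,1,−1,7,5],[0,1,2,3,4,1],[1,0,4,2,15,8]] with rank 3, matching the slide.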

Checking Controllability
 𝐴 = [[−1, 0, 2], [2, 1, 0], [1, 1, 3]],  𝐵 = [[1, 0], [0, 0], [0, 0]]

 𝑅 = [𝐵  𝐴𝐵  𝐴²𝐵] = [[1, 0, −1, 0, 3, 0], [0, 0, 2, 0, 0, 0], [0, 0, 1, 0, 4, 0]]

 rank(𝑅) = 3 (i.e., full rank)
 So the system is controllable, using only 1 actuator (𝑢₁)!

Tip: Given matrices 𝐴, 𝐵, use the Matlab command R = ctrb(A, B) to compute the controllability matrix, and rank(R) to find its rank.

Observability
 Very rarely are all system states 𝐱 visible to the external world
 E.g. model may have internal physical states such as temperature,
pressure, object velocity: that may not be measurable by an external
observer
 Only quantities made available by a sensor are visible to the external world
 Observability:
 Can we reconstruct an arbitrary internal state of the system if we have only
the system outputs available?
 Important question that affects which sensors we pick for the system

Checking Observability
 Find the observability matrix 𝑊:
 𝑊 = [𝐶; 𝐶𝐴; 𝐶𝐴²; …; 𝐶𝐴ⁿ⁻¹] (stacked; 𝑛 is the state dimension)
 The system is observable if 𝑊 has full column rank, i.e., rank(𝑊) = 𝑛
 Example:
   [𝑥̇₁; 𝑥̇₂; 𝑥̇₃] = [[−1, 0, 2], [2, 1, 0], [1, 1, 3]] [𝑥₁; 𝑥₂; 𝑥₃] + [[1, 1], [0, 1], [1, 0]] [𝑢₁; 𝑢₂];
   [𝑦₁; 𝑦₂] = [[0, 1, 1], [1, 1, 0]] [𝑥₁; 𝑥₂; 𝑥₃]

Checking Observability
 𝐴 = [[−1, 0, 2], [2, 1, 0], [1, 1, 3]],  𝐶 = [[0, 1, 1], [1, 1, 0]]

 𝑊 = [𝐶; 𝐶𝐴; 𝐶𝐴²] = [[0, 1, 1], [1, 1, 0], [3, 2, 3], [1, 1, 2], [4, 5, 15], [3, 3, 8]],  rank(𝑊) = 3

 Matrix 𝑊 is full rank ⇒ the pair (𝐴, 𝐶) is observable
 If sensors measure 𝑥₁, 𝑥₂, 𝑥₃ independently, we need three sensors
 If instead we have one sensor measuring 𝑥₂ + 𝑥₃ and another measuring 𝑥₁ + 𝑥₂, the system uses only two sensors

Checking Observability
 What if we used only one sensor that measures the sum of all states, i.e., 𝑦 = 𝑥₁ + 𝑥₂ + 𝑥₃?

 𝐴 = [[−1, 0, 2], [2, 1, 0], [1, 1, 3]],  𝐶 = [1 1 1]

 𝑊 = [𝐶; 𝐶𝐴; 𝐶𝐴²] = [[1, 1, 1], [2, 2, 5], [7, 7, 19]],  rank(𝑊) = 2

 The observability matrix is not full rank! Some states cannot be reconstructed using only this one sensor!
 Tip: use the Matlab command obsv(A, C) to find 𝑊
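The one-sensor case can be checked the same way; the small row-reduction rank routine below is a minimal stand-in for Matlab's rank():

```python
# Stack W = [C; CA; CA^2] for the single-sensor output y = x1 + x2 + x3
# and row-reduce to find its rank.

def vec_mat(v, M):   # row vector times matrix
    return [sum(v[k] * M[k][j] for k in range(len(v))) for j in range(len(M[0]))]

def rank(M, tol=1e-9):
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > tol), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > tol:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

A = [[-1, 0, 2], [2, 1, 0], [1, 1, 3]]
c = [1, 1, 1]
cA = vec_mat(c, A)     # [2, 2, 5]
cA2 = vec_mat(cA, A)   # [7, 7, 19]
W = [c, cA, cA2]
# rank(W) == 2 < 3: this single sensor cannot reconstruct the full state.
```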

How do we reconstruct internal state?
 For linear systems (with no noise), this is done with the use of state
estimators or observers
 For linear systems with noisy measurements and possible "process noise" in
the system itself, we use the Kalman filter: introduced later in the course

 The most popular control method in the world started without any concerns
of controllability, observability etc.
 Purpose: Tracking a given reference signal

PID controllers
 While the previous controllers made systematic use of linear systems theory, PID controllers are the most widely used and most prevalent in practice (> 90%)
 Main architecture: the error 𝐞(𝑡) = 𝐫(𝑡) − 𝐲(𝑡) drives three parallel terms, whose sum is the control input 𝐮(𝑡) to the plant (𝐱̇ = 𝐴𝐱 + 𝐵𝐮, 𝐲 = 𝐶𝐱 + 𝐷𝐮):

   Proportional term:  𝐾_P 𝐞(𝑡)
   Integral term:      𝐾_I ∫₀ᵗ 𝐞(𝜏) d𝜏
   Derivative term:    𝐾_D d𝐞(𝑡)/d𝑡
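The architecture above can be sketched as a discrete-time loop on this deck's car model; the gains, step size, and integral clamp below are illustrative choices, not from the slides:

```python
# Minimal discrete-time PID driving the car's velocity to a set-point.
# The integral term is clamped as a crude guard against windup
# (discussed later in this deck).

def pid_step(e, state, Kp, Ki, Kd, dt, i_limit=1e3):
    integ, prev_e = state
    integ = max(-i_limit, min(i_limit, integ + e * dt))  # clamped integral
    deriv = (e - prev_e) / dt
    return Kp * e + Ki * integ + Kd * deriv, (integ, e)

m, k = 1000.0, 50.0          # car parameters from the earlier slides
v, r = 0.0, 20.0             # start at rest, track a 20 m/s reference
Kp, Ki, Kd = 2000.0, 200.0, 0.0   # hand-picked illustrative gains
dt = 0.01
state = (0.0, r - v)
for _ in range(int(60.0 / dt)):
    F, state = pid_step(r - v, state, Kp, Ki, Kd, dt)
    v += dt * (F - k * v) / m
```

With these gains the integral term removes the steady-state offset that a P-only controller would leave, so after 60 s the velocity sits at the 20 m/s set-point.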

P-only controller
 Compute error signal 𝐞 𝑡 = 𝐫 𝑡 − 𝐲(𝑡)
 Proportional term 𝐾𝑝 𝐞 𝑡 :
 𝐾𝑝 proportional gain;
 Feedback correction proportional to error
 Cons:
 If 𝐾_p is small, the error can be large! [undercompensation]
 If 𝐾_p is large:
 the system may oscillate or even become unstable [overcompensation]
 it may not converge to the set-point fast enough
 A P-controller always has a steady-state (offset) error when tracking non-zero
reference signals

PD-controller
 Compute error signal 𝐞 𝑡 = 𝐫 𝑡 − 𝐲(𝑡)
 Derivative term 𝐾𝑑 𝐞ሶ 𝑡 :
 𝐾𝑑 derivative gain;
 Feedback proportional to how fast the error is increasing/decreasing
 Purpose:
 “Predictive” term, can reduce overshoot: if error is decreasing slowly, feedback is
slower
 Can improve tolerance to disturbances
 Disadvantages:
 Still cannot eliminate steady-state error
 High frequency disturbances can get amplified

PI/PID controller
 Integral term: 𝐾_I ∫₀ᵗ 𝐞(𝜏) d𝜏
 𝐾_I: integral gain
 Feedback action proportional to the cumulative error over time
 If a small error persists, it adds up over time and pushes the system towards
eliminating it: this removes the offset/steady-state error
 Disadvantages:
 Integral action by itself can increase instability
 (adding a “D” term can help)
 Integrator term can accumulate error and suggest corrections that are not
feasible for the actuators (integrator windup)
 Real systems “saturate” the integrator beyond a certain value

PID controller in practice

 Many heuristics exist to tune PID controllers, i.e., to find values of 𝐾_P, 𝐾_I, 𝐾_D
 Several tuning recipes; they usually rely on designer expertise
 E.g., the Ziegler–Nichols method: increase 𝐾_P until the system starts oscillating with period 𝑇 (say at 𝐾_P = 𝐾*), then set 𝐾_P = 0.6𝐾*, 𝐾_I = 1.2𝐾*/𝑇, 𝐾_D = 3𝐾*𝑇/40
 Works well with linear systems or for small perturbations

Measuring control performance
 Typical to excite the closed-loop system with a "step input"
 I.e., a sudden change in the reference set-point
 "Step" input of (say) 2.5 at time 0; step response shown in blue
 Metrics read off the step response:
 Peak/overshoot (corr. undershoot)
 Settling time/settling region
 Rise time
 Peak time
 Steady-state error
[Image © Mathworks]

Next lecture
 Nonlinear Control

