
Predictive Control

A First Course in Predictive Control, Second Edition, by J.A. Rossiter provides a comprehensive introduction to predictive control methods, highlighting the limitations of classical control techniques and the advantages of model predictive control (MPC). The book is organised into chapters covering fundamental concepts, prediction modelling, and predictive functional control, together with practical examples and MATLAB code. It aims to support learning by students and educators in the field of control engineering.


A First Course in
Predictive Control
Second Edition

J.A. Rossiter
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2018 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
No claim to original U.S. Government works
Printed on acid-free paper
Version Date: 20180322
International Standard Book Number-13: 978-1-138-09934-0 (Hardback)
This book contains information obtained from authentic and highly regarded sources. Reasonable
efforts have been made to publish reliable data and information, but the author and publisher cannot
assume responsibility for the validity of all materials or the consequences of their use. The authors and
publishers have attempted to trace the copyright holders of all material reproduced in this publication
and apologize to copyright holders if permission to publish in this form has not been obtained. If any
copyright material has not been acknowledged please write and let us know so we may rectify in any
future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information
storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access
www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc.
(CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization
that provides licenses and registration for a variety of users. For organizations that have been granted
a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and
are used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
In the hope that students and lecturing staff will
find this easy to learn from and in the spirit of sharing,
creating a world that accepts and supports each other.
Contents

Overview and guidance for use xix

Book organisation xxi

Acknowledgements xxiii

1 Introduction and the industrial need for predictive control 1


1.1 Guidance for the lecturer/reader . . . . . . . . . . . . . . . . . . . 2
1.2 Motivation and introduction . . . . . . . . . . . . . . . . . . . . . 2
1.3 Classical control assumptions . . . . . . . . . . . . . . . . . . . . 3
1.3.1 PID compensation . . . . . . . . . . . . . . . . . . . . . . 3
1.3.2 Lead and Lag compensation . . . . . . . . . . . . . . . . . 4
1.3.3 Using PID and lead/lag for SISO control . . . . . . . . . . 4
1.3.4 Classical control analysis . . . . . . . . . . . . . . . . . . 4
1.4 Examples of systems hard to control effectively with classical
methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4.1 Controlling systems with non-minimum phase zeros . . . . 5
1.4.2 Controlling systems with significant delays . . . . . . . . . 7
1.4.3 ILLUSTRATION: Impact of delay on margins and closed-
loop behaviour . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.4 Controlling systems with constraints . . . . . . . . . . . . 10
1.4.5 Controlling multivariable systems . . . . . . . . . . . . . . 12
1.4.6 Controlling open-loop unstable systems . . . . . . . . . . . 16
1.5 The potential value of prediction . . . . . . . . . . . . . . . . . . 17
1.5.1 Why is predictive control logical? . . . . . . . . . . . . . . 18
1.5.2 Potential advantages of prediction . . . . . . . . . . . . . . 19
1.6 The main components of MPC . . . . . . . . . . . . . . . . . . . 19
1.6.1 Prediction and prediction horizon . . . . . . . . . . . . . . 20
1.6.2 Why is prediction important? . . . . . . . . . . . . . . . . . 20
1.6.3 Receding horizon . . . . . . . . . . . . . . . . . . . . . . 22
1.6.4 Predictions are based on a model . . . . . . . . . . . . . . 23
1.6.5 Performance indices . . . . . . . . . . . . . . . . . . . . . 25
1.6.6 Degrees of freedom in the predictions or prediction class . . 27
1.6.7 Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.6.8 Constraint handling . . . . . . . . . . . . . . . . . . . . . 28
1.6.9 Multivariable and interactive systems . . . . . . . . . . . . 30

vii

1.6.10 Systematic use of future demands . . . . . . . . . . . . . . 31


1.7 MPC philosophy in summary . . . . . . . . . . . . . . . . . . . . 31
1.8 MATLAB files from this chapter . . . . . . . . . . . . . . . . . . . 33
1.9 Reminder of book organisation . . . . . . . . . . . . . . . . . . . 33

2 Prediction in model predictive control 35


2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.2 Guidance for the lecturer/reader . . . . . . . . . . . . . . . . . . . 36
2.2.1 Typical learning outcomes for an examination assessment . 37
2.2.2 Typical learning outcomes for an assignment/coursework . . 37
2.3 General format of prediction modelling . . . . . . . . . . . . . . . 37
2.3.1 Notation for vectors of past and future values . . . . . . . . 38
2.3.2 Format of general prediction equation . . . . . . . . . . . . 38
2.3.3 Double subscript notation for predictions . . . . . . . . . . 38
2.4 Prediction with state space models . . . . . . . . . . . . . . . . . 39
2.4.1 Prediction by iterating the system model . . . . . . . . . . 39
2.4.2 Predictions in matrix notation . . . . . . . . . . . . . . . . 40
2.4.3 Unbiased prediction with state space models . . . . . . . . 43
2.4.4 The importance of unbiased prediction and deviation
variables . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.4.5 State space predictions with deviation variables . . . . . . . 46
2.4.6 Predictions with state space models and input increments . 48
2.5 Prediction with transfer function models – matrix methods . . . . . 49
2.5.1 Ensuring unbiased prediction with transfer function models 50
2.5.2 Prediction for a CARIMA model with T(z) = 1: the SISO
case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.5.3 Prediction with a CARIMA model and T = 1: the MIMO
case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
2.5.4 Prediction equations with T(z) ≠ 1: the SISO case . . . . . 57
2.5.4.1 Summary of the key steps in computing prediction
equations with a T-filter . . . . . . . . . . . . . . 57
2.5.4.2 Forming the prediction equations with a T-filter
beginning from predictions (2.59) . . . . . . . . 58
2.6 Using recursion to find prediction matrices for CARIMA models . 60
2.7 Prediction with independent models . . . . . . . . . . . . . . . . . 63
2.7.1 Structure of an independent model and predictions . . . . . 64
2.7.2 Removing prediction bias with an independent model . . . 65
2.7.3 Independent model prediction via partial fractions for SISO
systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
2.7.3.1 Prediction for SISO systems with one pole . . . . 66
2.7.3.2 PFC for higher-order models having real roots . . 67
2.8 Prediction with FIR models . . . . . . . . . . . . . . . . . . . . . 68
2.8.1 Impulse response models and predictions . . . . . . . . . . 69
2.8.2 Prediction with step response models . . . . . . . . . . . . 71
2.9 Closed-loop prediction . . . . . . . . . . . . . . . . . . . . . . . . 73

2.9.1 The need for numerically robust prediction with open-loop
unstable plant . . . . . . . . . . . . . . . . . . . . . . . . 73
2.9.2 Pseudo-closed-loop prediction . . . . . . . . . . . . . . . . 74
2.9.3 Illustration of prediction structures with the OLP and CLP . 76
2.9.4 Basic CLP predictions for state space models . . . . . . . . 77
2.9.5 Unbiased closed-loop prediction with autonomous models . 78
2.9.6 CLP predictions with transfer function models . . . . . . . 79
2.10 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . 81
2.11 Summary of MATLAB code supporting prediction . . . . . . . . . 82

3 Predictive functional control 83


3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.2 Guidance for the lecturer/reader . . . . . . . . . . . . . . . . . . . 85
3.3 Basic concepts in PFC . . . . . . . . . . . . . . . . . . . . . . . . 86
3.3.1 PFC philosophy . . . . . . . . . . . . . . . . . . . . . . . . 86
3.3.2 Desired responses . . . . . . . . . . . . . . . . . . . . . . 87
3.3.3 Combining predicted behaviour with desired behaviour: the
coincidence point . . . . . . . . . . . . . . . . . . . . . . . 88
3.3.4 Basic mathematical definition of a simple PFC law . . . . . 89
3.3.5 Alternative PFC formulation using independent models . . . 89
3.3.6 Integral action within PFC . . . . . . . . . . . . . . . . . . 91
3.3.7 Coping with large dead-times in PFC . . . . . . . . . . . . 91
3.3.8 Coping with constraints in PFC . . . . . . . . . . . . . . . 93
3.3.9 Open-loop unstable systems . . . . . . . . . . . . . . . . . 94
3.4 PFC with first-order models . . . . . . . . . . . . . . . . . . . . . 95
3.4.1 Analysis of PFC for a first-order system . . . . . . . . . . . 95
3.4.2 Numerical examples of PFC on first-order systems . . . . . 97
3.4.2.1 Dependence on choice of desired closed-loop pole
λ with ny = 1 . . . . . . . . . . . . . . . . . . . 97
3.4.2.2 Dependence on choice of coincidence horizon ny
with fixed λ . . . . . . . . . . . . . . . . . . . . 97
3.4.2.3 Effectiveness at handling plants with delays . . . 98
3.4.2.4 Effectiveness at handling plants with uncertainty
and delays . . . . . . . . . . . . . . . . . . . . . 98
3.4.2.5 Effectiveness at handling plants with uncertainty,
constraints and delays . . . . . . . . . . . . . . . 99
3.5 PFC with higher-order models . . . . . . . . . . . . . . . . . . . . 100
3.5.1 Is a coincidence horizon of 1 a good choice in general? . . 101
3.5.1.1 Nominal performance analysis with ny = 1 . . . . 102
3.5.1.2 Stability analysis with ny = 1 . . . . . . . . . . . 102
3.5.2 The efficacy of λ as a tuning parameter . . . . . . . . . . . 103
3.5.2.1 Closed-loop poles/behaviour for G1 with various
choices of ny . . . . . . . . . . . . . . . . . . . . 104
3.5.2.2 Closed-loop poles/behaviour for G3 with various
choices of ny . . . . . . . . . . . . . . . . . . . . 104

3.5.2.3 Closed-loop poles/behaviour for G2 with various


choices of ny . . . . . . . . . . . . . . . . . . . . 105
3.5.2.4 Closed-loop poles/behaviour for G4 with various
choices of ny . . . . . . . . . . . . . . . . . . . . 105
3.5.3 Practical tuning guidance . . . . . . . . . . . . . . . . . . 106
3.5.3.1 Closed-loop poles/behaviour for G5 with various
choices of ny . . . . . . . . . . . . . . . . . . . . 106
3.5.3.2 Mean-level approach to tuning . . . . . . . . . . 106
3.5.3.3 Intuitive choices of coincidence horizon to ensure
good behaviour . . . . . . . . . . . . . . . . . . 107
3.5.3.4 Coincidence horizon of ny = 1 will not work in
general . . . . . . . . . . . . . . . . . . . . . . . 108
3.5.3.5 Small coincidence horizons often imply over-
actuation . . . . . . . . . . . . . . . . . . . . . . 109
3.5.3.6 Example 1 of intuitive choice of coincidence
horizon . . . . . . . . . . . . . . . . . . . . . . . 109
3.5.3.7 Example 6 of intuitive choice of coincidence
horizon . . . . . . . . . . . . . . . . . . . . . . . 110
3.6 Stability results for PFC . . . . . . . . . . . . . . . . . . . . . . . 111
3.7 PFC with ramp targets . . . . . . . . . . . . . . . . . . . . . . . . 112
3.8 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
3.9 MATLAB code available for readers . . . . . . . . . . . . . . . . 115

4 Predictive control – the basic algorithm 117


4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.2 Guidance for the lecturer/reader . . . . . . . . . . . . . . . . . . . 119
4.3 Summary of main results . . . . . . . . . . . . . . . . . . . . . . 119
4.3.1 GPC control structure . . . . . . . . . . . . . . . . . . . . 120
4.3.2 Main components of an MPC law . . . . . . . . . . . . . . 120
4.4 The GPC performance index . . . . . . . . . . . . . . . . . . . . 120
4.4.1 Concepts of good and bad performance . . . . . . . . . . . 121
4.4.2 Properties of a convenient performance index . . . . . . . . 121
4.4.3 Possible components to include in a performance index . . . 122
4.4.4 Concepts of biased and unbiased performance indices . . . . 125
4.4.5 The dangers of making the performance index too simple . . 127
4.4.6 Integral action in predictive control . . . . . . . . . . . . . 128
4.4.7 Compact representation of the performance index and choice
of weights . . . . . . . . . . . . . . . . . . . . . . . . . . 129
4.5 GPC algorithm formulation for transfer function models . . . . . . 130
4.5.1 Selecting the degrees of freedom for GPC . . . . . . . . . . 130
4.5.2 Performance index for a GPC control law . . . . . . . . . . 131
4.5.3 Optimising the GPC performance index . . . . . . . . . . . 132
4.5.4 Transfer function representation of the control law . . . . . 133
4.5.5 Numerical examples for GPC with transfer function models
and MATLAB code . . . . . . . . . . . . . . . . . . . . . 134

4.5.6 Closed-loop transfer functions . . . . . . . . . . . . . . . . 136
4.5.7 GPC based on MFD models with a T-filter (GPCT) . . . . . 137
4.5.7.1 Why use a T-filter and conceptual thinking? . . . 137
4.5.7.2 Algebraic procedures with a T-filter . . . . . . . . 138
4.5.8 Sensitivity of GPC . . . . . . . . . . . . . . . . . . . . . . 141
4.5.8.1 Complementary sensitivity . . . . . . . . . . . . 142
4.5.8.2 Sensitivity to multiplicative uncertainty . . . . . . 142
4.5.8.3 Disturbance and noise rejection . . . . . . . . . . 143
4.5.8.4 Impact of a T-filter on sensitivity . . . . . . . . . 144
4.5.9 Analogies between PFC and GPC . . . . . . . . . . . . . . 145
4.6 GPC formulation for finite impulse response models and
Dynamic Matrix Control . . . . . . . . . . . . . . . . . . . . . . . 146
4.7 Formulation of GPC with an independent prediction model . . . . . 148
4.7.1 GPC algorithm with independent transfer function model . . 148
4.7.2 Closed-loop poles in the IM case with an MFD model . . . 151
4.8 GPC with a state space model . . . . . . . . . . . . . . . . . . . . 153
4.8.1 Simple state augmentation . . . . . . . . . . . . . . . . . . 153
4.8.1.1 Computing the predictive control law with an
augmented state space model . . . . . . . . . . . 154
4.8.1.2 Closed-loop equations and integral action . . . . 156
4.8.2 GPC using state space models with deviation variables . . . 156
4.8.2.1 GPC algorithm based on deviation variables . . . 157
4.8.2.2 Using an observer to estimate steady-state values
for the state and input . . . . . . . . . . . . . . . 160
4.8.3 Independent model GPC using a state space model . . . . . 161
4.9 Chapter summary and general comments on stability and tuning of
GPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
4.10 Summary of MATLAB code supporting GPC simulation . . . . . . 163
4.10.1 MATLAB code to support GPC with a state space model and
a performance index based on deviation variables . . . . . . 163
4.10.2 MATLAB code to support GPC with an MFD or CARIMA
model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
4.10.3 MATLAB code to support GPC using an independent model
in MFD format. . . . . . . . . . . . . . . . . . . . . . . . . 165
4.10.4 MATLAB code to support GPC with an augmented state
space model. . . . . . . . . . . . . . . . . . . . . . . . . . 165

5 Tuning GPC: good and bad choices of the horizons 167


5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
5.2 Guidance for the lecturer/reader . . . . . . . . . . . . . . . . . . . 168
5.3 Poor choices lead to poor behaviour . . . . . . . . . . . . . . . . . 169
5.4 Concepts of well-posed and ill-posed optimisations . . . . . . . . . 171
5.4.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
5.4.2 What is an ill-posed objective/optimisation? . . . . . . . . . 174
5.4.2.1 Parameterisation of degrees of freedom . . . . . . 174

5.4.2.2 Choices of performance index . . . . . . . . . . . 175


5.4.2.3 Consistency between predictions and eventual
behaviour . . . . . . . . . . . . . . . . . . . . . 176
5.4.2.4 Summary of how to avoid ill-posed optimisations 178
5.5 Illustrative simulations to show impact of different
parameter choices on GPC behaviour . . . . . . . . . . . . . . . . 179
5.5.1 Numerical examples for studies on GPC tuning . . . . . . . 179
5.5.2 The impact of low output horizons on GPC performance . . 179
5.5.3 The impact of high output horizons on GPC performance . . 182
5.5.3.1 GPC illustrations with ny large and nu = 1 . . . . 182
5.5.3.2 When would a designer use large ny and nu = 1? . 185
5.5.3.3 Remarks on the efficacy of DMC . . . . . . . . . 187
5.5.4 Effect on GPC performance of changing nu and prediction
consistency . . . . . . . . . . . . . . . . . . . . . . . . . . 187
5.5.5 The impact of the input weight λ on the ideal horizons and
prediction consistency . . . . . . . . . . . . . . . . . . . . 191
5.5.6 Summary insights on choices of horizons and weights for
GPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.6 Systematic guidance for “tuning” GPC . . . . . . . . . . . . . . . 195
5.6.1 Proposed offline tuning method . . . . . . . . . . . . . . . 196
5.6.2 Illustrations of efficacy of tuning guidance . . . . . . . . . . 196
5.7 MIMO examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
5.8 Dealing with open-loop unstable systems . . . . . . . . . . . . . . 201
5.8.1 Illustration of the effects of increasing the output horizon
with open-loop unstable systems . . . . . . . . . . . . . . . 202
5.8.2 Further comments on open-loop unstable systems . . . . . . 203
5.8.3 Recommendations for open-loop unstable processes . . . . 204
5.9 Chapter summary: guidelines for tuning GPC . . . . . . . . . . . . 205
5.10 Useful MATLAB code . . . . . . . . . . . . . . . . . . . . . . . . 206

6 Dual-mode MPC (OMPC and SOMPC) and stability guarantees 207


6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
6.2 Guidance for the lecturer . . . . . . . . . . . . . . . . . . . . . . . 209
6.3 Foundation of a well-posed MPC algorithm . . . . . . . . . . . . . 209
6.3.1 Definition of the tail and recursive consistency . . . . . . . 210
6.3.2 Infinite horizons and the tail imply closed-loop stability . . 212
6.3.3 Only the output horizon needs to be infinite . . . . . . . . . 213
6.3.4 Stability proofs with constraints . . . . . . . . . . . . . . . 214
6.3.5 Are infinite horizons impractical? . . . . . . . . . . . . . . 215
6.4 Dual-mode MPC – an overview . . . . . . . . . . . . . . . . . . . 216
6.4.1 What is dual-mode control in the context of MPC? . . . . . 216
6.4.2 The structure/parameterisation of dual-mode predictions . . 217
6.4.3 Overview of MPC dual-mode algorithms: Suboptimal MPC
(SOMPC) . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
6.4.4 Is SOMPC guaranteed stabilising? . . . . . . . . . . . . . . 219

6.4.5 Why is SOMPC suboptimal? . . . . . . . . . . . . . . . . . 220


6.4.6 Improving optimality of SOMPC, the OMPC algorithm . . . 221
6.5 Algebraic derivations for dual-mode MPC . . . . . . . . . . . . . . 222
6.5.1 The cost function for linear predictions over infinite horizons 223
6.5.2 Forming the cost function for dual-mode predictions . . . . 224
6.5.3 Computing the SOMPC control law . . . . . . . . . . . . . 225
6.5.4 Definition of terminal mode control law via optimal control 225
6.5.5 SOMPC reduces to optimal control in the unconstrained case 226
6.5.6 Remarks on stability and performance of SOMPC/OMPC . 227
6.6 Closed-loop paradigm implementations of OMPC . . . . . . . . . 228
6.6.1 Overview of the CLP concept . . . . . . . . . . . . . . . . 229
6.6.2 The SOMPC/OMPC law with the closed-loop paradigm . . 231
6.6.3 Properties of OMPC/SOMPC solved using the CLP . . . . . 233
6.6.3.1 Open-loop unstable systems . . . . . . . . . . . . 233
6.6.3.2 Conditioning and structure of performance index J 233
6.6.4 Using autonomous models in OMPC . . . . . . . . . . . . 234
6.6.4.1 Forming the predicted cost with an autonomous
model . . . . . . . . . . . . . . . . . . . . . . . 235
6.6.4.2 Using autonomous models to support the
definition of constraint matrices . . . . . . . . . . 236
6.6.4.3 Forming the predicted cost with an expanded
autonomous model . . . . . . . . . . . . . . . . . 236
6.6.5 Advantages and disadvantages of the CLP over the open-
loop predictions . . . . . . . . . . . . . . . . . . . . . . . . 237
6.7 Numerical illustrations of OMPC and SOMPC . . . . . . . . . . . 238
6.7.1 Illustrations of the impact of Q, R on OMPC . . . . . . . . . 238
6.7.2 Structure of cost function matrices for OMPC . . . . . . . . 239
6.7.3 Structure of cost function matrices for SOMPC . . . . . . . 240
6.7.4 Demonstration that the parameters of KSOMPC vary with nc . 241
6.7.5 Demonstration that Jk is a Lyapunov function with OMPC
and SOMPC . . . . . . . . . . . . . . . . . . . . . . . . . 241
6.7.6 Demonstration that with SOMPC the optimum decision
changes each sample . . . . . . . . . . . . . . . . . . . . . 243
6.7.7 List of available illustrations . . . . . . . . . . . . . . . . . 244
6.8 Motivation for SOMPC: Different choices for mode 2 of dual-mode
control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
6.8.1 Dead-beat terminal conditions . . . . . . . . . . . . . . . . 247
6.8.2 A zero terminal control law . . . . . . . . . . . . . . . . . 248
6.8.3 Input parameterisations to eliminate unstable modes in the
prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
6.8.4 Reflections and comparison of GPC and OMPC . . . . . . . 249
6.8.4.1 DMC/GPC is practically effective . . . . . . . . . 250
6.8.4.2 The potential role of dual-mode algorithms . . . . 250
6.9 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
6.10 MATLAB files in support of this chapter . . . . . . . . . . . . . . 251

6.10.1 Files in support of OMPC/SOMPC simulations . . . . . . . 251


6.10.2 MATLAB files producing numerical illustrations from
Section 6.7 . . . . . . . . . . . . . . . . . . . . . . . . . . 252

7 Constraint handling in GPC/finite horizon predictive control 253


7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
7.2 Guidance for the lecturer . . . . . . . . . . . . . . . . . . . . . . . 254
7.3 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
7.3.1 Definition of a saturation policy and limitations . . . . . . . 255
7.3.2 Limitations of a saturation policy . . . . . . . . . . . . . . 255
7.3.3 Summary of constraint handling needs . . . . . . . . . . . . 258
7.4 Description of typical constraints and linking to GPC . . . . . . . . 259
7.4.1 Input rate constraints with a finite horizon (GPC predictions) 260
7.4.2 Input constraints with a finite horizon (GPC predictions) . . 261
7.4.3 Output constraints with a finite horizon (GPC predictions) . 262
7.4.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
7.4.5 Using MATLAB to build constraint inequalities . . . . . . . 264
7.5 Constrained GPC . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
7.5.1 Quadratic programming in GPC . . . . . . . . . . . . . . . 267
7.5.2 Implementing constrained GPC in practice . . . . . . . . . 268
7.5.3 Stability of constrained GPC . . . . . . . . . . . . . . . . . 268
7.5.4 Illustrations of constrained GPC with MATLAB . . . . . . 269
7.5.4.1 ILLUSTRATION: Constrained GPC with only
input constraints . . . . . . . . . . . . . . . . . . 269
7.5.4.2 ILLUSTRATION: Constrained GPC with output
constraints . . . . . . . . . . . . . . . . . . . . . 270
7.5.4.3 ILLUSTRATION: Constrained GPC with a
T-filter . . . . . . . . . . . . . . . . . . . . . . . 272
7.5.4.4 ILLUSTRATION: Constrained GPC with an
independent model . . . . . . . . . . . . . . . . . 272
7.6 Understanding a quadratic programming optimisation . . . . . . . 273
7.6.1 Generic quadratic function optimisation . . . . . . . . . . . 273
7.6.2 Impact of linear constraints on minimum of a quadratic
function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
7.6.2.1 ILLUSTRATION: Contour curves, constraints
and minima for a quadratic function (7.32) . . . . 274
7.6.2.2 ILLUSTRATION: Impact of change of linear term
in quadratic function (7.34) on constrained mini-
mum . . . . . . . . . . . . . . . . . . . . . . . . 276
7.6.3 Illustrations of MATLAB for solving QP optimisations . . . 277
7.6.4 Constrained optimals may be counter-intuitive: saturation
control can be poor . . . . . . . . . . . . . . . . . . . . . . 278
7.7 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
7.8 MATLAB code supporting constrained MPC simulation . . . . . . 280
7.8.1 Miscellaneous MATLAB files used in illustrations . . . . . 280

7.8.2 MATLAB code for supporting GPC based on an MFD model 281
7.8.3 MATLAB code for supporting GPC based on an independent
model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282

8 Constraint handling in dual-mode predictive control 283


8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
8.2 Guidance for the lecturer/reader and introduction . . . . . . . . . . 285
8.3 Background and assumptions . . . . . . . . . . . . . . . . . . . . 285
8.4 Description of simple or finite horizon constraint handling approach
for dual-mode algorithms . . . . . . . . . . . . . . . . . . . . . . 287
8.4.1 Simple illustration of using finite horizon inequalities or
constraint handling with dual-mode predictions . . . . . . . 287
8.4.2 Using finite horizon inequalities or constraint handling with
unbiased dual-mode predictions . . . . . . . . . . . . . . . 288
8.4.3 MATLAB code for constraint inequalities with dual-mode
predictions and SOMPC/OMPC simulation examples . . . . 290
8.4.3.1 ILLUSTRATION: Constraint handling in SISO
OMPC . . . . . . . . . . . . . . . . . . . . . . . 290
8.4.3.2 ILLUSTRATION: Constraint handling in MIMO
OMPC . . . . . . . . . . . . . . . . . . . . . . . 291
8.4.4 Remarks on using finite horizon inequalities in dual-mode
MPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
8.4.5 Block diagram representation of constraint handling in dual-
mode predictive control . . . . . . . . . . . . . . . . . . . 292
8.5 Concepts of redundancy, recursive feasibility, admissible sets and
autonomous models . . . . . . . . . . . . . . . . . . . . . . . . . 293
8.5.1 Identifying redundant constraints in a set of inequalities . . 294
8.5.2 The need for recursive feasibility . . . . . . . . . . . . . . . 294
8.5.3 Links between recursive feasibility and closed-loop stability 296
8.5.4 Terminal regions and impacts . . . . . . . . . . . . . . . . 297
8.5.5 Autonomous model formulations for dual-mode predictions 297
8.5.5.1 Unbiased prediction and constraints with
autonomous models . . . . . . . . . . . . . . . . 298
8.5.5.2 Constraint inequalities and autonomous models
with deviation variables . . . . . . . . . . . . . . 299
8.5.5.3 Core insights with dual-mode predictions and
constraint handling . . . . . . . . . . . . . . . . 300
8.5.6 Constraint handling with maximal admissible sets and
invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
8.5.6.1 Maximal admissible set . . . . . . . . . . . . . . 301
8.5.6.2 Concepts of invariance and links to the MAS . . . 306
8.5.6.3 Efficient definition of terminal sets using a MAS . 307
8.5.6.4 Maximal controlled admissible set . . . . . . . . 308
8.5.6.5 Properties of invariant sets . . . . . . . . . . . . . 310

8.6 The OMPC/SOMPC algorithm using an MCAS to represent


constraint handling . . . . . . . . . . . . . . . . . . . . . . . . . . 311
8.6.1 Constraint inequalities for OMPC using an MCAS . . . . . 311
8.6.2 The proposed OMPC/SOMPC algorithm . . . . . . . . . . 313
8.6.3 Illustrations of the dual-mode prediction structure in OMPC/
SOMPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
8.7 Numerical examples of the SOMPC/OMPC approach with
constraint handling . . . . . . . . . . . . . . . . . . . . . . . . . . 314
8.7.1 Constructing invariant constraint sets/MCAS using
MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
8.7.2 Closed-loop simulations of constrained OMPC/SOMPC
using ompc_simulate_constraints.m: the regulation case . . 315
8.7.2.1 ILLUSTRATION: ompc_constraints_
example1.m . . . . . . . . . . . . . . . . . . . . 315
8.7.2.2 ILLUSTRATION: ompc_constraints_
example2.m . . . . . . . . . . . . . . . . . . . . 316
8.7.2.3 ILLUSTRATION: ompc_constraints_
example3.m . . . . . . . . . . . . . . . . . . . . 316
8.7.2.4 ILLUSTRATION: For OMPC, the optimal choice
of input perturbation ck does not change . . . . . 317
8.7.3 Closed-loop simulations of constrained OMPC/SOMPC
using ompc_simulate_constraintsb.m: the tracking case . . . 318
8.7.3.1 ILLUSTRATION: ompc_constraints_
example4.m . . . . . . . . . . . . . . . . . . . . 318
8.7.3.2 ILLUSTRATION: ompc_constraints_
example5.m . . . . . . . . . . . . . . . . . . . . 319
8.7.4 More efficient code for the tracking case and disturbance
uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . 319
8.8 Discussion on the impact of cost function and algorithm selection on
feasibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
8.9 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
8.10 Summary of MATLAB code supporting constrained OMPC
simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
8.10.1 Code for supporting SOMPC/OMPC with constraint
handling over a pre-specified finite horizon . . . . . . . . . 325
8.10.2 Code for supporting SOMPC/OMPC with constraint
handling using maximal admissible sets . . . . . . . . . . . 326

9 Conclusions 329
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
9.2 Design choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
9.2.1 Scenario and funding . . . . . . . . . . . . . . . . . . . . . 330
9.2.2 Effective tuning . . . . . . . . . . . . . . . . . . . . . . . . 330
9.2.3 Constraint handling and feasibility . . . . . . . . . . . . . . 331
9.2.4 Coding complexity . . . . . . . . . . . . . . . . . . . . . . 332

9.2.5 Sensitivity to uncertainty . . . . . . . . . . . . . . . . . . . 333


9.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334

A Tutorial and exam questions and case studies 335


A.1 Typical exam and tutorial questions with minimal computation . . . 336
A.2 Generic questions . . . . . . . . . . . . . . . . . . . . . . . . . . 342
A.3 Case study based questions for use with assignments . . . . . . . . 343
A.3.1 SISO example case studies . . . . . . . . . . . . . . . . . . 344
A.3.2 MIMO example case studies . . . . . . . . . . . . . . . . . 346

B Further reading 349


B.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
B.2 Guidance for the lecturer/reader . . . . . . . . . . . . . . . . . . . 350
B.3 Simple variations on the basic algorithm . . . . . . . . . . . . . . . 350
B.3.1 Alternatives to the 2-norm in the performance index . . . . 350
B.3.2 Alternative parameterisations of the degrees of freedom . . 351
B.4 Parametric approaches to solving quadratic programming . . . . . 352
B.4.1 Strengths and weaknesses of parametric solutions in brief . 352
B.4.2 Outline of a typical parametric solution . . . . . . . . . . . 353
B.5 Prediction mismatch and the link to feedforward design in MPC . . 355
B.5.1 Feedforward definition in MPC . . . . . . . . . . . . . . . 356
B.5.2 Mismatch between predictions and actual behaviour . . . . 356
B.6 Robust MPC: ensuring feasibility in the presence of uncertainty . . 358
B.6.1 Constraint softening: hard and soft constraints . . . . . . . . 360
B.6.2 Back off and borders . . . . . . . . . . . . . . . . . . . . . 361
B.7 Invariant sets and predictive control . . . . . . . . . . . . . . . . . 363
B.7.1 Link between invariance and stability . . . . . . . . . . . . 363
B.7.2 Ellipsoidal invariant sets . . . . . . . . . . . . . . . . . . . 364
B.7.3 Maximal volume ellipsoidal sets for constraint satisfaction . 365
B.7.4 Invariance in the presence of uncertainty . . . . . . . . . . . 366
B.7.4.1 Disturbance uncertainty and tube MPC . . . . . . 367
B.7.4.2 Parameter uncertainty and ellipsoidal methods . . 368
B.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370

C Notation, models and useful background 371


C.1 Guidance for the lecturer/reader . . . . . . . . . . . . . . . . . . . 371
C.2 Notation for linear models . . . . . . . . . . . . . . . . . . . . . . 372
C.2.1 State space models . . . . . . . . . . . . . . . . . . . . . . 372
C.2.2 Transfer function models single-input-single-output and
multi-input-multi-output . . . . . . . . . . . . . . . . . . . 373
C.2.3 Author’s MATLAB notation for SISO transfer function and
MFD models . . . . . . . . . . . . . . . . . . . . . . . . . 374
C.2.4 Equivalence between difference equation format and vector
format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
C.2.5 Step response models . . . . . . . . . . . . . . . . . . . . 376

C.2.6 Sample rates . . . . . . . . . . . . . . . . . . . . . . . . . 376


C.2.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
C.3 Minimisation of functions of many variables . . . . . . . . . . . . 378
C.3.1 Gradient operation and quadratic functions . . . . . . . . . 378
C.3.2 Finding the minimum of quadratic functions of many
variables . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
C.4 Common notation . . . . . . . . . . . . . . . . . . . . . . . . . . 380

References 383

Index 397
Overview and guidance for use

The main aim is to create a focussed and affordable textbook which is suitable for a
single or beginners' course on model predictive control (MPC). As a consequence:
1. In the early parts of each chapter, an attempt is made not to dwell too
much on the mathematical details and proofs of interest to researchers
but not to a typical student and instead focus as much as possible on the
concepts and understanding needed to apply the method effectively.
2. Many books and resources on MPC cover the basic concepts far too
briefly and focus on more advanced issues. This book does the opposite
and explains the basics slowly and with numerous examples.
3. Numerous illustrative examples are included throughout and also prob-
lems which encourage readers to engage with and understand the con-
cepts.
4. The topic coverage is deliberately limited to a range considered sufficient
for taught modules.
5. Chapters are supported by MATLAB® files which are available on the
open web (or with the book) so that readers can implement basic designs
on linear models. Numerous student problems encourage readers to learn
by doing/designing.
6. In some cases short videos are available which support key parts of this
book. [http://controleducation.group.shef.ac.uk/indexwebbook.html]
7. Chapters include guidance for the lecturer to help highlight which sec-
tions can be used for a taught module and which could be considered
broadening but non-essential.
Insight is given in a non-theoretical way and there are a number of summary
boxes to give a quick picture of the key results without the need to read through the
detailed explanation. There is a strong focus on the philosophy of predictive control
answering the questions, ‘why?’ and ‘does it help me?’ The basic concepts are in-
troduced and then these are developed to fit different purposes: for instance, how to
model, to predict, to tune, to handle constraints, to ensure feasibility, to guarantee
stability and to consider what options there are with regard to models, algorithms,
complexity versus performance, and so forth.
Research students who want to study predictive control in more depth are advised
to make use of the research literature, which is very extensive, but even for them I
hope the focus on concepts in this book will prove an invaluable foundation.


About the author


Dr. Rossiter has been researching predictive control since the late 1980s and he has
published over 300 articles in journals and conferences on the topic. His particular
contributions have focused on stability, feasibility and computational simplicity. He
also has a parallel interest in developing good practice in university education. He
has a Bachelor’s degree and a Doctorate from the University of Oxford. He spent
nine years as a lecturer at Loughborough University and is currently a reader at:

The University of Sheffield


Department of Automatic Control and Systems Engineering
Mappin Street
Sheffield, S1 3JD
UK
email: [email protected]

Websites:
http://controleducation.group.shef.ac.uk/indexwebbook.html
https://www.sheffield.ac.uk/acse/staff/jar
https://www.youtube.com/channel/UCMBXZxd-j6VqrynykO1dURw

Assumptions about the reader


It is assumed that readers have taken at least a first course in control and thus are
familiar with concepts such as system behaviours, Laplace transforms, z-transforms,
state space models and simple closed-loop analysis. The motivation aspects of Chap-
ter 1 in particular are premised on a reader’s ability to reflect on the efficacy of basic
control design techniques. Otherwise, most material is introduced as required.

MATLAB® is a registered trademark of The Mathworks Inc. For product information
please contact: The Mathworks Inc., 3 Apple Hill Drive, Natick, MA 01760-2098,
USA. Web: www.mathworks.com
Book organisation

Chapter 1: Gives basic motivation and background. This chapter explores, fairly
concisely, control problems where classical approaches are difficult to apply or
obviously suboptimal. It then gives some insight into how prediction forms a
logical foundation with which to solve many such problems and the key compo-
nents required to form a systematic control law.
Chapter 2: Considers how predictions are formed for a number of different model
types. Predictions are a main building block of MPC and thus this is a core skill
users will need. Also explains the concept of unbiased prediction and why this
is important.
Chapter 3: Introduces a very simple to implement and widely applied MPC law,
that is predictive functional control (PFC). Gives numerous illustrative examples
to help readers understand tuning, where PFC is a plausible solution and where
a more expensive MPC approach is more appropriate.
Chapter 4: Introduces the most common MPC performance index/optimisation
used in industrial applications and shows how these are combined with a pre-
diction model to form an effective control law.
Chapter 5: Gives insights and guidance on tuning finite horizon MPC laws. How
do I ensure that I get sensible answers and well-posed decision making? What
is the impact of uncertainty?
Chapter 6: Considers the stability of MPC and shows how a more systematic anal-
ysis of MPC suggests the use of so-called dual-mode predictions and infinite
horizons as these give much stronger performance and stability assurances. In-
troduces the algebra behind these types of MPC approaches.
Chapter 7: Considers constraint handling and why this is an important part of prac-
tical MPC algorithms. Shows how constraints are embedded systematically into
finite horizon MPC algorithms.
Chapter 8: Considers constraint handling with infinite horizon algorithms. It is
noted that while such approaches have more rigour, they are also more compli-
cated to implement and moreover are more likely to encounter feasibility chal-
lenges.
Chapter 9: Gives a concise conclusion and some indication of how a user might
both choose from and design with the variety of MPC approaches available.


Appendix A: Provides extensive examples and guidance on tutorial questions, ex-
amination questions and assignments. Also includes some case studies that can
be used as a basis for assignments. Outline solutions are available from the pub-
lisher to staff adopting the book.
Appendix B: Gives a brief taster for a number of issues deserving further study.
Parametric methods are a tool for moving the online optimisation computa-
tions to offline calculations, thus improving transparency of the control law.
It is demonstrated that feedforward information can easily be misused with a
careless MPC design. There is also some discussion of common methods for
handling uncertainty.
Appendix C: Summarises basic notation and common background used throughout
the book.
MATLAB: Examples are supported throughout by MATLAB simulations; all the
MATLAB files for this are available to the reader so they can reproduce and
modify the scenarios for themselves. Sections at the end of each chapter sum-
marise the MATLAB code available on an open website; also available from the
publisher’s website.
http://controleducation.group.shef.ac.uk/htmlformpc/introtoMPCbook.html
STUDENT PROBLEMS: Problems for readers to test their understanding and to
direct useful private study/investigations are embedded throughout the book in
the relevant sections rather than at chapter endings. In the main these are open-
ended.
Not included: It is not the purpose of this book to write history but rather to state
what is now understood and how to use this. Other books and many journal
articles (e.g., [5, 16, 41, 94, 100, 177]) already give good historical accounts.
Also, it is not the intention to be comprehensive but rather to cover the basics
well, so many topics are excluded, a notable one being non-linear systems.
Acknowledgements

Enormous thanks to my wife Diane Rossiter who has performed an incredibly
diligent and careful proof reading and editing as well as provided a number of
useful critical comments.

Also thanks to all the people I have collaborated with
over the years from whom I have learnt a lot which is
now embedded within this work and with apologies to the
many not listed here: Basil Kouvaritakis, Jesse Gossner,
Scott Trimboli, Luigi Chisci, Liuping Wang, Sirish Shah,
Mark Cannon, Colin Jones, Ravi Gondalaker, Jan Schur-
rmans, Lars Imsland, Jacques Richalet, Robert Haber, Bilal
Khan, Guillermo Valencia Palomo, Il Seop Choi, Wai Hou
Lio, Adham Al Sharkawi, Evans Ejegi, Shukri Dughman,
Yahya Al-Naumani, Muhammad Abdullah, Bryn Jones and
many more.

1
Introduction and the industrial need for
predictive control

CONTENTS
1.1 Guidance for the lecturer/reader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Motivation and introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Classical control assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3.1 PID compensation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3.2 Lead and Lag compensation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.3 Using PID and lead/lag for SISO control . . . . . . . . . . . . . . . . . . . 4
1.3.4 Classical control analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Examples of systems hard to control effectively with classical
methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4.1 Controlling systems with non-minimum phase zeros . . . . . . . 5
1.4.2 Controlling systems with significant delays . . . . . . . . . . . . . . . . 7
1.4.3 ILLUSTRATION: Impact of delay on margins and
closed-loop behaviour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.4 Controlling systems with constraints . . . . . . . . . . . . . . . . . . . . . . . 10
1.4.5 Controlling multivariable systems . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4.6 Controlling open-loop unstable systems . . . . . . . . . . . . . . . . . . . 16
1.5 The potential value of prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.5.1 Why is predictive control logical? . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.5.2 Potential advantages of prediction . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6 The main components of MPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6.1 Prediction and prediction horizon . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.6.2 Why is prediction important? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.6.3 Receding horizon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.6.4 Predictions are based on a model . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.6.5 Performance indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.6.6 Degrees of freedom in the predictions or prediction class . . . 27
1.6.7 Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.6.8 Constraint handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.6.9 Multivariable and interactive systems . . . . . . . . . . . . . . . . . . . . . . 30
1.6.10 Systematic use of future demands . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.7 MPC philosophy in summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.8 MATLAB files from this chapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33


1.9 Reminder of book organisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

1.1 Guidance for the lecturer/reader


This chapter is intended primarily as motivational and background material which
need not be part of the assessment of a typical module on predictive control. As
such, it is largely non-mathematical and readers should not struggle with reading any
parts of it.
Nevertheless, this content could be assessed through an essay/bookwork type
question within an exam, or if desired, aspects of this content could be included
through case studies within an assignment.

1.2 Motivation and introduction


A useful starting point for any book is to motivate the reader: why do I need this book
or topic? Consequently this chapter will begin by demonstrating the numerous sce-
narios where classical control approaches are inadequate. Not too much time is spent
on this as it is motivational, but hopefully enough for the reader to be convinced
that there are a significant number of real control problems where better approaches
are required.
Having given some motivation, next this chapter gives some insight into how
humans deal with such challenging control problems and demonstrates the underly-
ing concepts, often prediction, which ultimately form the main building blocks of
predictive control.
Predictive control is very widely implemented in industry and hence a key ques-
tion to ask is why? In fact, more specifically, readers will want to know when predic-
tive control is a logical choice from the many control approaches available to them.
A key point to note is that predictive control is an approach, not a specific algorithm.
The user needs to tailor the approach to their specific needs in order to produce an
effective control law.
It is more important that readers understand key concepts than specific algo-
rithms.
1. Why is it done this way?
2. What is the impact of uncertainty?
3. How does this change with constraints?
4. What are the tuning parameters and how can I use them? Which
choices are poor and why?
5. Etc.

Consequently, although this book includes technical details, the underlying fo-
cus is on readers understanding the concepts and thus how to ensure the algorithm
and tuning they choose is likely to lead to an effective control law for their specific
context.
Section 1.3 will give a concise review of some classical control approaches be-
fore Section 1.4 demonstrates a number of scenarios where such approaches are dif-
ficult to tune effectively, or suboptimal. Section 1.5 then introduces arguments for a
predictive control approach, followed by Section 1.6 which uses analysis of human
behaviour to set out solid principles before we move into mathematical detail. The
overall MPC philosophy is summarised in Section 1.7 followed by Section 1.9 which
gives a concise summary of the structure of the book.

1.3 Classical control assumptions


It is not the purpose of this book to teach classical control or to give a comprehensive
view of the control techniques used in industry. Nevertheless, it is useful to begin
from a brief summary of the sorts of techniques which are commonly used (e.g.,
[32, 108]). For convenience and clarity, these will be presented in the simplest forms.

1.3.1 PID compensation


PID or proportional, integral and derivative control laws are the most commonly
adopted structure in industry. Their ubiquity and success are linked to the three
parameters being intuitive feedback parameters and simple to tune for many cases.
K(s) = Kp + KI/s + Kd s    (1.1)
1. Proportional or Kp: The magnitude of the control action is proportional to
the size of the error. As the proportional is increased, the response to error
becomes more aggressive leading to faster responses. If the proportional
is too small, the response to error is very slow. Usually, at least for systems
with simple dynamics, a proportional exists which gives the right balance
between speed of response and input activity.
2. Integral or KI : Proportional alone cannot enable output tracking as, in
steady-state, the output of the proportional is only non-zero if the track-
ing error is non-zero. Integration of the error is an obvious and simple
mechanism (also a human-based strategy) to enable a non-zero steady-state
controller output, even when the steady-state error is zero. As with pro-
portional, KI must not be too large or it will induce oscillation and not too
small or one will get very slow convergence.
3. Derivative or Kd : An obvious component to detect the rate at which the er-
ror is reducing and change the input accordingly - if the error is reducing

too fast it is highly likely that the control is too aggressive and a reduction
in input is needed to avoid overshoot and oscillation. However, the deriva-
tive is often selected to be zero as it has a high gain at high frequency and
thus can accentuate noise and lead to input chatter. Such discussions are
beyond the remit of this book.
Several tuning rules for PID are given in the literature but actually, for a typical
system, one could arrive at close to optimum values with very little trial and error
using a simulation package.
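The roles of the three gains can be made concrete with a short simulation. The sketch below is a generic discrete-time approximation of (1.1) in Python (the book's supporting code is MATLAB; the plant, gains and sample time here are invented for illustration), using a running sum for the integral and a backward difference for the derivative:

```python
class PID:
    """Discrete-time approximation of K(s) = Kp + KI/s + Kd*s."""

    def __init__(self, Kp, KI, Kd, dt):
        self.Kp, self.KI, self.Kd, self.dt = Kp, KI, Kd, dt
        self.integral = 0.0      # running sum approximating the integral term
        self.prev_error = 0.0    # previous error for the backward difference

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.Kp * error + self.KI * self.integral + self.Kd * derivative


# Quick check: PI control (Kd = 0) of a first-order plant dy/dt = -y + u
# drives the output to the target r = 1 with no steady-state error.
dt = 0.01
controller = PID(Kp=2.0, KI=1.0, Kd=0.0, dt=dt)
y = 0.0
for _ in range(2000):             # 20 seconds by Euler integration
    u = controller.step(1.0 - y)
    y += (-y + u) * dt
print(round(y, 3))
```

With Kd = 0 this reduces to a PI law; the non-zero steady-state controller output that enables exact tracking is held entirely by the integral term, exactly as argued in point 2 above.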

1.3.2 Lead and Lag compensation


To some extent Lead and Lag (e.g., Equation (1.2)) discussions can be subsumed
into PID compensation. A lead Klead acts a little like a derivative, but with lower
sensitivity to high frequency noise and thus can be used to reduce oscillation and
overshoot. A Lag Klag acts a little like an integral in that it increases low frequency
gain, but this can be at the expense of bandwidth (speed of response). No more will
be said here.
Klag = K(s + w)/(s + w/α);    Klead = K(s + w/α)/(s + w);    α > 1    (1.2)
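These frequency-domain claims are easy to check numerically. The sketch below (illustrative Python rather than the book's MATLAB; K = 1, w = 1 and α = 10 are arbitrary values) confirms that the lag raises the low-frequency gain from K to Kα while leaving the high-frequency gain at K, and evaluates the phase advance of the lead at w/√α, the frequency of maximum lead:

```python
import cmath, math

K, w, alpha = 1.0, 1.0, 10.0

def K_lag(s):
    return K * (s + w) / (s + w / alpha)

def K_lead(s):
    return K * (s + w / alpha) / (s + w)

print(abs(K_lag(1e-9j)))    # low-frequency gain ~ K*alpha
print(abs(K_lag(1e9j)))     # high-frequency gain ~ K
wm = w / math.sqrt(alpha)   # frequency of maximum phase lead
phase_lead = math.degrees(cmath.phase(K_lead(1j * wm)))
print(round(phase_lead, 1))
```

For α = 10 the maximum phase lead is arcsin((α − 1)/(α + 1)) ≈ 54.9°, which is what the last line reports.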

1.3.3 Using PID and lead/lag for SISO control


Many real systems have relatively simple dynamics and there is one very important
observation that is used by control practitioners and theoreticians alike.
For single input single output (SISO) systems, if the system dynamic can be well
approximated by a second-order system which is not noticeably under-damped, then
the control you can achieve from a PID compensator is usually quite close to the best
you can get from optimal control or any other advanced strategy. In other words, you
need a good reason not to use a PID type of approach!
Of course this begs the question, when would you deviate from advocating PID
and hence the next few sections give some examples.

1.3.4 Classical control analysis


Some parts of this chapter assume that readers are familiar with analysis techniques
such as root-loci, Nyquist diagrams and the like. We will not use these to any great
depth but readers will find it useful to undertake a quick revision of those topics
if necessary. Suitable materials are well covered in all mainstream classical control
textbooks and also there are some videos and summary resources on the author’s
website [http://controleducation.group.shef.ac.uk/indexwebbook.html].

[Block diagram: target r and output y are compared to give error e; compensator M(s) produces input u to process G(s).]

FIGURE 1.1
Feedback loop structure.

Summary: Classical control is not part of this book, but some understand-
ing of the basics will help readers to understand the motivation for adopting
predictive control.

1.4 Examples of systems hard to control effectively with classical methods
This section will assume a simple feedback structure as given in Figure 1.1 where M
represents a compensator and G the process, r is the target, u the system input, e the
tracking error and y the system output.

1.4.1 Controlling systems with non-minimum phase zeros


Non-minimum phase systems are quite common.
• One of the most well-known non-minimum phase responses is boiler drum level.
Steam drums in large boilers are a saturated mixture of liquid and vapour. When
steam out of the drum increases due to demand, level control must increase feedwa-
ter flow, which, initially, cools and collapses bubbles in the saturated mixture in the
drum causing level to temporarily drop (shrink) before rising, with also an inverse
response going the other direction (decreasing feedwater causes temporary level
increase, or swell). The phenomenon is sometimes referred to as “shrink/swell”.

• A boat turning right or left: a boat steered to the left (port) will initially move
starboard (right) and then swing the boat to the left (port) because the initial forces
on the rudder are opposite to the direction of the forces on the prow of the boat.
This is why you MUST have a tugboat to move large ships away from docks, since
the ship must initially swing into the dock to turn away from the dock after an initial

transient. The same phenomenon happens with rear-steered passenger vehicles (or
front-steered vehicles that are backing up).
• Balancing a pole on your hand is non-minimum phase - in order to tilt the pole
to the right, you initially have to swing your hand to the left and then to the right.
This is a great example of the use of non-minimum-phase zeros in the control loop to
stabilize an otherwise unstable system. This effect also comes into play with rocket
launches, since a rocket engine must balance the rocket above it during take-off.
• Another example is the feeding of bacteria (their number is evaluated through a
mean value accounting for births and deaths within the overall population). When
you feed bacteria they start to eat and then forget to reproduce themselves. The
mean number of bacteria first decreases (they are still dying at the same rate)
and then increases (as they are stronger) until a new equilibrium is reached.

ILLUSTRATION: Impact of right half plane zeros


Consider a SISO system:
G(s) = (s − 2)/((s + 1)(s + 4))    (1.3)

The key point is that the zero is in the
right half plane (RHP). From the root-
loci plot it is clear that for large val-
ues of gain, the closed-loop will have
a pole in the RHP and this is totally
unavoidable!

[Figure: root locus of G(s) — for large gains a closed-loop pole crosses into the RHP.]

Next look at the open-loop step
response. The system has a non-
minimum phase characteristic (see ar-
row in the figure). This means that a
high gain control law can easily lead
to instability because the initial re-
sponse is opposite to what might be
expected and thus the normal control
logic can get confused; it is important
to be patient (cautious or low gain) be-
fore responding to observations to en-
sure one does not respond to the non-
minimum phase characteristic.

[Figure: open-loop step response of G(s) — the output initially moves in the positive direction before settling at the negative steady-state value of −0.5 (the DC gain).]
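The inverse response can also be verified analytically: partial fraction expansion of G(s)/s for the system in (1.3) gives the unit step response y(t) = −0.5 + e^(−t) − 0.5e^(−4t), so the initial slope is +1 even though the final value (the DC gain) is −0.5. A quick numerical check (illustrative Python, not code from the book):

```python
import math

def step_response(t):
    # Unit step response of G(s) = (s - 2)/((s + 1)(s + 4)),
    # obtained by partial fractions of G(s)/s.
    return -0.5 + math.exp(-t) - 0.5 * math.exp(-4.0 * t)

print(step_response(0.0))             # starts at zero
print(step_response(0.3) > 0.0)       # early movement is positive ...
print(round(step_response(10.0), 3))  # ... yet the output settles at -0.5
```

The two facts together — positive initial movement, negative steady state — are exactly the behaviour that confuses a high-gain feedback law.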

Summary: Although you can find a PI or PID compensator to give reasonable
closed-loop behaviour for non-minimum phase systems, it is less straightforward
to tune and critically, much harder to get high bandwidth responses.

STUDENT PROBLEM
Produce some examples using classical control design methods which
demonstrate how systems with non-minimum phase characteristics
(RHP zeros) achieve much lower closed-loop bandwidths in general
than systems with equivalent poles but only left half plane (LHP) ze-
ros.

ILLUSTRATION: Car example and impact of delay


A reader can immediately see the dangers of delay using the simple analogy of
driving a car. How would your driving be affected if you had to wait 2 seconds
between observing something (in effect a 2 second measurement delay) and
making a change to the controls (accelerator, brake, steering)? Clearly, this delay
would cause numerous accidents, dead pedestrians, going through red lights and
so forth if driving at normal speeds.
One could only avoid such incidents by driving slowly enough so that you were
guaranteed not to hit anything within the next 3-4 seconds, thus leaving 1-2
seconds for any required action. The key point here is that you have to DRIVE
VERY SLOWLY and thus sacrifice closed-loop bandwidth/performance. The
larger the delay, the more performance is sacrificed.

1.4.2 Controlling systems with significant delays


This section demonstrates why delays within a process can have catastrophic effects
on closed-loop behaviour. Delays can be caused by problems with measurement as
some things cannot be measured instantaneously (such as blood tests in hospitals).
They may also be caused by transport delays, for example a delay between a
demanded actuation and its impact on the process; this would be typical where a
conveyor belt is being used.
It is always worth asking whether one can change the process in order to reduce
or remove the delay. Sometimes actions such as moving a sensor are possible and
can give substantial benefits. It is always better to have as little delay as possible.

Here we are less interested in the cause of the delay than its existence and will use a
simple delay model based on Laplace transforms as follows:
• Undelayed process G(s) and delayed process e^(−sT) G(s), where T is the delay time.
• The implied phase shift (that is, −wT) caused by the delay in the Nyquist diagram
is derived from: e^(−sT) → e^(−jwT); ∠e^(−jwT) = −wT.
Nyquist diagrams are useful for illustrating the impact of delays on the gain and
phase margins and thus indirectly on closed-loop behaviour. A delay acts like a phase
rotation within the Nyquist diagram and specifically, rotates the diagram clockwise,
thus reducing margins. The larger the delay, the more the rotation.
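The pure-rotation property is easy to verify numerically: at any frequency w the delay term e^(−jwT) has unit magnitude and contributes −wT radians of phase, so it bends the Nyquist locus clockwise without altering its gain (illustrative Python, standing in for the book's MATLAB; the values of w and T are arbitrary):

```python
import cmath

def delay_response(w, T):
    """Frequency response of the delay term e^{-sT} evaluated at s = jw."""
    return cmath.exp(-1j * w * T)

w, T = 2.0, 1.5
z = delay_response(w, T)
print(abs(z))           # magnitude is 1 (up to rounding) at every frequency
print(cmath.phase(z))   # phase equals -w*T, here -3.0 rad
```

Because the magnitude is untouched, gain-based classical fixes cannot remove the problem: only the phase (and hence the margins) is eroded.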

1.4.3 ILLUSTRATION: Impact of delay on margins and closed-loop behaviour
Consider the following two processes (one with and one without delay):

G(s) = 3/(s(s + 1)(s + 4));    H(s) = 3e^(−sT)/(s(s + 1)(s + 4))    (1.4)

The Bode diagrams (a) for systems G(s), H(s) are given below with T = 1. The phase
margin has dropped from 50° to around 15° as a consequence of the delay. The gain
plot is unaffected by the delay. The corresponding closed-loop step responses (b) and
Nyquist diagrams (c) emphasise that adding delay causes a significant degradation in
performance. With a delay only a little bigger than one, the Nyquist diagram encircles
the -1 point and the system is closed-loop unstable.
[Figure (a): Bode diagrams and margins for G(s), H(s) — the magnitude plots coincide, but the delay rotates the phase of H(s) and shrinks the phase margin.]
[Figure (b): closed-loop step responses with unity negative feedback for G(s), H(s) with T = 1.]
In order to regain a reasonable phase margin, a significant reduction in gain is re-
quired (d) and thus a significant delay requires a significant loss in bandwidth in order
to retain reasonable behaviour, limited overshoot/oscillations and good margins.

[Figure (c): Nyquist diagrams for G(s), H(s) with T = 1.]
[Figure (d): closed-loop step responses with proportional gain selected to ensure a 50° phase margin, for delays T = 0, 1, 2, 5.]
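The margins quoted above can be reproduced without plotting. The sketch below (generic Python standing in for the MATLAB margin analysis) finds the gain-crossover frequency of G(s) in (1.4) by bisection, reads off the phase margin, and subtracts the wc·T phase loss introduced by the T = 1 delay; the results come out near the 50° and roughly 15° figures quoted in the text:

```python
import cmath, math

def G(s):
    """G(s) = 3/(s(s+1)(s+4)) from (1.4), evaluated at a complex s."""
    return 3.0 / (s * (s + 1.0) * (s + 4.0))

def gain_crossover(lo=1e-3, hi=10.0):
    # |G(jw)| decreases monotonically here, so bisect on |G(jw)| = 1.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if abs(G(1j * mid)) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

wc = gain_crossover()                                  # gain-crossover frequency
pm_G = 180.0 + math.degrees(cmath.phase(G(1j * wc)))   # phase margin, no delay
pm_H = pm_G - math.degrees(wc * 1.0)                   # T = 1 delay removes wc*T
print(round(wc, 3), round(pm_G, 1), round(pm_H, 1))
```

The delay itself does not move the crossover frequency (its gain is unity), which is why the phase-margin loss is simply wc·T.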

Smith predictor
A common solution to significant delays is the so-called Smith pre-
dictor which has the structure below.
[Block diagram: Smith predictor — the target is compared with the delay-free model output plus a correction term, where the correction is the difference (model-to-process error) between the process output and the delayed model output; the control law drives both the process and the model.]

It is not the purpose of this book to explore Smith predictor techniques, but two
observations are in order:
• The technique makes use of prediction and thus can be considered in the class of
predictive control laws, although the design methodology is not close to modern
MPC methods.
• In simple terms, the compensator is designed on a delay-free model with a correc-
tion for the difference between the model and process outputs. It is assumed that, if
the model behaves well, so will the process but the overall design can be sensitive
to errors in the assumed delay.
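The Smith structure above can be sketched in discrete time. The first-order plant, the ten-sample delay, the proportional gain and the unit target below are illustrative assumptions, not values taken from the book.

```python
# Discrete-time sketch of a Smith predictor. Plant: y[k+1] = 0.9*y[k]
# + 0.1*u[k-d] with d = 10 samples of input delay; proportional gain
# K = 4; unit step target r. All of these values are assumed.
a, b, d, K, r = 0.9, 0.1, 10, 4.0, 1.0

def run(smith, n=200):
    y = 0.0           # process output (sees the delayed input)
    ym = 0.0          # model output with the delay removed
    ymd = 0.0         # model output including the delay
    buf = [0.0] * d   # input delay line
    out = []
    for _ in range(n):
        # Smith feedback: delay-free model output + model-to-process error
        fb = ym + (y - ymd) if smith else y
        u = K * (r - fb)
        u_delayed = buf[0]
        buf = buf[1:] + [u]
        y = a * y + b * u_delayed
        ym = a * ym + b * u
        ymd = a * ymd + b * u_delayed
        out.append(y)
    return out

with_sp, without_sp = run(smith=True), run(smith=False)
print(f"with Smith predictor: y settles at {with_sp[-1]:.2f}")
print(f"plain proportional loop: peak y = {max(without_sp):.2f}")
```

With an exact model the model-to-process error is zero, so the controller effectively sees the delay-free model and the loop settles smoothly (at 0.8, since proportional-only control leaves a steady-state offset); closing the same gain directly around the delayed process produces a large overshoot. As noted above, mismatch in the assumed delay erodes this benefit.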

Summary: Classical design techniques which rely on simple gain changes are often not appropriate for systems with delays.

STUDENT PROBLEM
Produce some examples which demonstrate how delays impact upon
achievable performance using classical design methods. Standard
MATLAB® tools such as feedback.m, nyquist.m and step.m can be
used. Delay may be entered through the tf.m block, for example with:

G = tf(1, [1, 2, 5], 'IODelay', 3)  ⇒  G(s) = e^{−3s}/(s^2 + 2s + 5)

ILLUSTRATION: Constraints have only a small impact on behaviour

In some cases, the presence of a constraint may have a relatively small impact.
One such case would be upper input constraints in conjunction with a process
having benign dynamics. In this case, the best control one can achieve is
often to saturate the input. Consider the system and lead compensator pairing,
using the loop structure of Figure 1.1, and the upper input limit in Eqn.(1.5).

G = 1/(s(s+1)^2);   M(s) = 4(s+0.2)/(s+2);   u(t) ≤ 0.5    (1.5)

The figure compares the behaviour with and without constraints, from which it is
clear that the simple use of saturation has led to rather sluggish performance,
especially when compared to the achievable constrained behaviour with an
alternative approach (shown in dashed lines). Nevertheless, the constrained
behaviour is still acceptable.

[Figure: unconstrained and constrained outputs and inputs, together with the achievable output and input.]
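The saturated behaviour can be reproduced with a simple forward-Euler simulation of the loop in Eqn.(1.5); the step size, horizon and unit step target are assumptions of this sketch, and the "achievable" trajectory from the figure is not recomputed here.

```python
dt, T = 0.002, 10.0
N = int(T / dt)

def simulate_lead_loop(u_max=None):
    """Unity-feedback loop of Figure 1.1 with G = 1/(s(s+1)^2) and the
    lead compensator M(s) = 4(s+0.2)/(s+2) = 4 - 7.2/(s+2), integrated
    by forward Euler. u_max = 0.5 reproduces the upper input limit of
    Eqn (1.5); u_max = None gives the unconstrained loop."""
    x1 = x2 = x3 = 0.0    # plant states: y = x1
    w = 0.0               # compensator filter state: u = 4e - w
    ys, us = [], []
    for _ in range(N):
        e = 1.0 - x1                  # unit step target
        u = 4.0 * e - w
        if u_max is not None:
            u = min(u, u_max)         # upper input limit only, as in (1.5)
        w  += dt * (-2.0 * w + 7.2 * e)
        x1 += dt * x2
        x2 += dt * x3
        x3 += dt * (-x2 - 2.0 * x3 + u)
        ys.append(x1); us.append(u)
    return ys, us

y_free, _ = simulate_lead_loop()
y_sat, u_sat = simulate_lead_loop(u_max=0.5)
print(f"y(10): unconstrained {y_free[-1]:.2f}, saturated {y_sat[-1]:.2f}")
```

In this sketch the unconstrained loop uses inputs up to 4 and approaches the target within the horizon; with the input clipped at 0.5 the same loop is noticeably more sluggish, as in the figure.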

1.4.4 Controlling systems with constraints


Classical control techniques assume that linear analysis is valid; this assumption underpins any discussion of robustness to uncertainty and breaks down in the presence of constraints.

ILLUSTRATION: Constraints have a major impact on behaviour

In some cases, the presence of a constraint may have a huge impact on behaviour
if not properly accounted for. Here an illustration of a process with
almost 1st order dynamics is used to demonstrate the point. This example
(Eqn.(1.6)), again using the loop structure of Figure 1.1, includes a rate
constraint on the input.

G = (2s+4)/((s+1)(s+4));   M(s) = (s+1)/s;   −0.5 ≤ u(t) ≤ 4;   ‖du/dt‖ ≤ 0.2    (1.6)

The corresponding closed-loop responses are shown here. It is immediately clear
that although the unconstrained responses are excellent, the constrained
responses are unacceptable and show no sign of converging to the desired target
of one in a reasonable manner. Indeed, the responses seem to enter some form
of oscillation which is quite unexpected from the underlying linear dynamics.

[Figure: unconstrained and constrained outputs and inputs, together with the output of the PI.]

However, all real processes include constraints such as limits in absolute values
and rates for actuators (inputs), desired safety limits on outputs and states, desired
quality limits on outputs (linked to profit) and so forth. Whenever a system comes
up against a constraint, then the overall system behaviour is highly likely to become
non-linear and therefore any linear analysis is no longer valid. Indeed, one can easily
come up with examples where a linear feedback system is supposedly very robust to
parameter uncertainty, but the inclusion of a constraint causes instability. This section
gives some illustrations of the dangers of constraints but does not discuss classical
control solutions for such problems.
The problem in the second illustration above is caused by so-called integral saturation or windup, that is, where the integral term keeps increasing even though the
input has saturated. The PI is proposing to use the signal in the dotted line, but the implemented input is the signal in the dashed line. Hence, even though the output reaches
and passes the target around t = 6, the integral term is up at around 3.2 and therefore a
significant period of negative error is required before the integration brings this input
value back down to the sort of steady-state input values required. Consequently, the
saturated input continues to be implemented even though one really wants it to
start reducing as soon as the output passes the target at around t = 6. The output

only begins reducing again once the output from the PI drops sufficiently to bring
this well below the saturation level, and this is not until around t = 9. In this case it is
the rate limit which has dominated the behaviour but one can easily create examples
where the absolute limit is equally problematic.
A common solution to input constraints is to add some form of anti-windup,
which detects and exploits the observation that the actual input is not the same as the
controller output. Such approaches may not be easy to tune in general. Moreover, in
the modern era, with ready access to online computing and indeed the expectation
that even routine PI compensators are implemented via a computer, it is possible to
be a little more systematic and reset the integral as required. This discussion is not
part of the current book; suffice it to say that doing this systematically is still a major
challenge.
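One standard remedy, sketched here for the rate-limited loop of Eqn.(1.6), is back-calculation anti-windup: the difference between the applied and requested inputs is fed back into the integrator so it tracks what the actuator can actually deliver. The step size, horizon and tracking time constant Tt = 1 are assumptions of this sketch.

```python
dt, T = 0.001, 40.0

def simulate_rate_limited_pi(anti_windup, Tt=1.0):
    """PI control (M = (s+1)/s, i.e. Kp = Ki = 1) of
    G = (2s+4)/((s+1)(s+4)) through an actuator with the limits of
    Eqn (1.6): -0.5 <= u <= 4 and |du/dt| <= 0.2. With anti_windup = 1
    the integrator is corrected by (u - v)/Tt (back-calculation)."""
    x1 = x2 = 0.0      # plant states, y = 4*x1 + 2*x2
    z = 0.0            # integrator state
    u = 0.0            # actual (rate-limited, clipped) input
    ys, es = [], []
    for _ in range(int(T / dt)):
        y = 4.0 * x1 + 2.0 * x2
        e = 1.0 - y
        v = e + z                            # PI controller request
        # actuator: slew at most 0.2 per second, clip to [-0.5, 4]
        u += max(-0.2 * dt, min(0.2 * dt, v - u))
        u = max(-0.5, min(4.0, u))
        z += dt * (e + anti_windup * (u - v) / Tt)
        x1 += dt * x2
        x2 += dt * (-4.0 * x1 - 5.0 * x2 + u)
        ys.append(y); es.append(abs(e))
    return ys, es

tail = int(30.0 / dt)                        # errors over t in [30, 40]
_, e_plain = simulate_rate_limited_pi(anti_windup=0.0)
_, e_aw = simulate_rate_limited_pi(anti_windup=1.0)
print(f"late max |error|: plain PI {max(e_plain[tail:]):.2f}, "
      f"with anti-windup {max(e_aw[tail:]):.2f}")
```

In this sketch the plain PI keeps oscillating over the whole horizon, while the back-calculation scheme lets the integrator track the achievable input and the loop settles; this is a hedged illustration of one technique, not the systematic reset scheme alluded to above.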

STUDENT PROBLEMS
1. Produce some further examples which demonstrate how including
constraints, even for SISO systems, can lead to a challenging control
design.
An m-file and Simulink pair (openloopunstable.m, openloopunstablesim.slx) are provided as a template to enable you to do this more
quickly. Enter your parameters into the m-file, which calls the Simulink
file to provide a simulation. It is assumed that you can create a suitable classical design for the unconstrained case.
2. This section has dealt solely with undelayed SISO systems for sim-
plicity. Readers who want a bigger challenge might like to produce
some examples which demonstrate how combining delays and con-
straints leads to an even more challenging control design.
3. Lecturers who wish to focus on classical methods could ask stu-
dents to code and demonstrate the efficacy of a variety of anti-windup
techniques, but these topics are beyond the remit of this book.

Summary: There is a need for control approaches which enable systematic and straightforward constraint handling.

1.4.5 Controlling multivariable systems


Classical control techniques such as root-loci, Bode and Nyquist are targeted at SISO
systems. While some extensions to the multi-input-multi-output (MIMO) case have appeared in the literature, in the main these are clumsy and awkward to use and often
do not lead to systematic or satisfactory control designs. For example, one popular technique [93, 95] is multivariable Nyquist, whereby pre- and post-compensators
(K_pre, K_post) are used to diagonalise the process G(s). A diagonal process can be
treated as a set of SISO systems with no interaction between loops and thus normal SISO techniques can be used on each loop. [Connect input 1 with output 1 and
so forth, noting that inputs and outputs are based on the pre- and post-compensated
system rather than actual inputs and outputs.]
   
G = [ g11 g12 ... g1n ; g21 g22 ... g2n ; ... ; gn1 gn2 ... gnn ];   K_post G K_pre ≈ diag(h11, h22, ..., hnn)    (1.7)
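In the simplest case, the diagonalisation of (1.7) can be approximated statically: choose K_pre = G(0)^{-1} (and K_post = I) so that the compensated steady-state gain is the identity. The dynamics remain coupled, but the DC interaction is removed. The 2x2 gain matrix below is an invented example, not one from the book.

```python
def matmul2(A, B):
    # 2x2 matrix product, kept dependency-free for illustration
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    # inverse of a 2x2 matrix via the adjugate formula
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

G0 = [[1.0, 0.6], [0.8, 1.0]]    # hypothetical steady-state gains G(0)
K_pre = inv2(G0)                 # static decoupler
H0 = matmul2(G0, K_pre)          # compensated DC gain: ~ identity
print(H0)
```

As the text notes, a constant compensator cannot diagonalise the process at all frequencies, so this only guarantees dominance of the diagonal at steady state.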

The following illustrations demonstrate the consequences of using conventional PI
type approaches on MIMO processes which are not nearly diagonal.
It is clear that while a simple approach can work at times, at other times the
closed-loop behaviour is difficult to predict and significant detuning may be required to ensure stability and smooth behaviour. In practice, even with pre- and post-compensation, it is not possible to diagonalise a process completely, but the hope is
that the off-diagonal elements will be small compared to the diagonal elements, that
is:
 
H = K_post G K_pre = [ h11 h12 ... h1n ; h21 h22 ... h2n ; ... ; hn1 hn2 ... hnn ];   |h11| ≫ |h12| + ... + |h1n|,  ...,  |hnn| ≫ |hn1| + ... + |h_{n,n−1}|    (1.8)
In this case, one can do a design based just on the diagonal elements and this is
likely to give a reasonable result, but the bigger the off-diagonal elements, the less
applicable this approach will be. We will not discuss these approaches further as:
• They have limited applicability.
• Identifying suitable pre- and post-compensation is non-trivial in general.
• Pre- and post-compensation make it harder to manage constraints on the actual
inputs.
In practice, a more common approach in industry relies on experience of a given process,
whereby over time operators have learnt a control structure which works well enough
for the MIMO process in question. It is of course quite possible that the corresponding structure results in cautious control.

ILLUSTRATION: Example of MIMO system with mild interaction

Consider the system/compensator pair (1.9) where the system is close to diagonally dominant. Defining G(s), M(s) as in Figure 1.1:

G = [ 1/(s+1)^2 ,  0.2/(s+3) ;  0.4/(s+0.4) ,  (s+5)/((s+2)(s+3)) ];   M = [ (s+1)/s , 0 ;  0 , (s+2)/s ]    (1.9)

The PI design is done ignoring interaction and would give the closed-loop step
responses marked SISO (column 1 for changes in target 1 and column 2 for changes in target 2). It can be seen that when applied to the full
MIMO system the diagonal outputs y11, y22 are almost the same, whereas the
off-diagonal elements y12, y21 are almost zero, and hence a SISO approach to
the design has been reasonably effective. Readers can reproduce this with file
mimodiagonal.m.

[Figure: closed-loop step responses y11, y12, y21, y22 comparing the SISO designs with the full MIMO simulation.]

ILLUSTRATION: Example of MIMO system with significant interaction

Next consider the system/compensator pair of (1.10) where the system is not
close to diagonally dominant.

G = [ 1/(s+1)^2 ,  0.5/((s+0.2)(s+3)) ;  0.3/((s+1)(s+0.4)) ,  (s+5)/((s+2)(s+3)) ];   M = [ (s+1)/s , 0 ;  0 , (s+2)/s ]    (1.10)

The PI design is done ignoring interaction and would give the closed-loop step
responses marked SISO (column 1 for changes in target 1 and column 2 for changes in target 2). It can be seen that when applied to the full MIMO system the diagonal outputs begin almost the same as in the SISO case, but rapidly the
interaction from the off-diagonal elements begins to have an effect and here the
system is actually closed-loop unstable (note that the off-diagonal y12 is clearly
not converging). Readers can reproduce this with file mimodiagonal2.m.

[Figure: closed-loop step responses y11, y12, y21, y22 comparing the SISO designs with the full MIMO simulation; y12 diverges.]
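The difference between the two illustrations above can be quantified with the relative gain array (RGA) evaluated at the steady-state gains G(0). The RGA is a standard classical interaction measure, though it is not a construct used elsewhere in this book.

```python
def rga2(G):
    """Relative Gain Array of a 2x2 gain matrix: a diagonal entry
    lambda_11 near 1 means the diagonal input/output pairing interacts
    weakly; values far from 1 signal strong interaction."""
    (a, b), (c, d) = G
    lam = (a * d) / (a * d - b * c)
    return [[lam, 1 - lam], [1 - lam, lam]]

# Steady-state gains G(0) of the two examples:
G_mild = [[1.0, 0.2 / 3], [1.0, 5 / 6]]        # Eqn (1.9)
G_strong = [[1.0, 0.5 / 0.6], [0.75, 5 / 6]]   # Eqn (1.10)

print(f"lambda_11, mild interaction:   {rga2(G_mild)[0][0]:.2f}")
print(f"lambda_11, strong interaction: {rga2(G_strong)[0][0]:.2f}")
```

For (1.9) the diagonal RGA entry is about 1.09, consistent with the SISO designs working well; for (1.10) it is 4, consistent with the severe interaction and instability observed.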

STUDENT PROBLEMS
1. Consider the example in (1.11), which is a simplified state space
model of a power generation facility (see Section A.3.2). The inputs
u are governor valve position and fuel flow and the outputs y are steam
pressure (bar) and power (MW). Investigate the efficacy of a classical
control approach on this system.

ẋ = Ax + Bu;   y = Cx + Du    (1.11)

A = [ 0.0043  0     0.005
      0.02   −0.1   0
      0       0     0.1 ];

B = [ 0    −0.0041
      0     0.019
      0.1   0 ];    C = [ 1.75  13.2  0
                          0.87  0     0 ];    D = [ 0   1.66
                                                    0  −0.16 ]

2. This chapter has dealt with constraints and interactive multivariable
systems in separate sections. Produce some examples which demonstrate how including constraints in a multivariable problem increases
the difficulty of finding an effective control strategy.

Summary: Stability and performance analysis for a MIMO process is non-trivial using classical techniques, although for stable processes which are near diagonal, PID compensation is often still used and is reasonably effective. Hence,
there is a need for control approaches which enable systematic and straightforward handling of interactive multivariable systems.

1.4.6 Controlling open-loop unstable systems


Open-loop unstable systems are not particularly common, although systems which
include integral action are more so. Both cases are more challenging for traditional
control design due to an open-loop pole/poles being in the RHP or on the imaginary
axis. As a consequence, low gain control is usually unacceptable as the poles would
then remain close to the imaginary axis or in the RHP. This observation is in con-
flict with, for example, lag compensation, where the initial design step is to reduce
the gain to obtain good margins. Moreover, lag compensators move the asymptotes
of the root-loci to the right, thus giving little, if any, space for a system to achieve
good closed-loop poles. In consequence, unstable open-loop systems almost invari-
ably require some form of lead compensation, which moves asymptotes to the left
and provides phase uplift; unsurprisingly lead compensation is high gain at high fre-
quency and this is consistent with low gain control being unacceptable.
A common problem with high gain compensation is that it is far more sensitive to uncertainty of all forms. High gain at high frequency means that any noise
components are amplified. Also, modelling uncertainty tends to be larger at higher
frequencies, and these frequencies now correspond to regions near the critical point
of the Nyquist plot. Moreover, interaction with constraints is also more critical in
that failure to anticipate limits on the inputs can easily put the system into an unstabilisable state; controllability and stable closed-loop pole analysis are only valid if
the desired inputs can be delivered!

ILLUSTRATION: Unstable example

Consider the example of Equation (1.12) and the scenarios with and without
input constraints.

G = (s+2)/(s^2 + 3s − 4);   M(s) = (6s+3)/s;   −2.5 ≤ u ≤ 1    (1.12)

In the former case, the input can be delivered and the closed-loop behaviour
appears excellent. However, with input constraints activated, the closed-loop
behaviour is divergent as the initial inputs drive the system into an
unstabilisable state: large enough inputs to stabilise the system are not possible.

[Figure: output y and input u, unconstrained and constrained.]
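The instability mechanism can be reproduced with a simple forward-Euler simulation of Eqn.(1.12); the step size, unit step target and canonical state-space realisation are assumptions of this sketch.

```python
dt, T = 0.001, 5.0

def simulate_unstable_loop(constrained):
    """Unity-feedback simulation of Eqn (1.12): G = (s+2)/(s^2+3s-4),
    which has an open-loop pole at s = +1, under the PI-type law
    M = (6s+3)/s, i.e. u = 6e + 3*integral(e), optionally clipped to
    the input range [-2.5, 1]."""
    x1 = x2 = 0.0     # plant states, y = 2*x1 + x2
    z = 0.0           # integral of the error
    ys = []
    for _ in range(int(T / dt)):
        y = 2.0 * x1 + x2
        e = 1.0 - y
        z += dt * e
        u = 6.0 * e + 3.0 * z
        if constrained:
            u = max(-2.5, min(1.0, u))
        x1 += dt * x2
        x2 += dt * (4.0 * x1 - 3.0 * x2 + u)
        ys.append(y)
    return ys

y_lin = simulate_unstable_loop(constrained=False)
y_sat = simulate_unstable_loop(constrained=True)
print(f"|y(5)|: unconstrained {abs(y_lin[-1]):.2f}, "
      f"constrained {abs(y_sat[-1]):.2f}")
```

In this sketch the unconstrained loop settles near the target, while the clipped loop locks onto a constraint and the unstable open-loop mode takes over, mirroring the divergent response in the figure.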

Summary: Classical design techniques which rely on simple gain changes are often not appropriate for open-loop unstable systems, as high gain compensation including lead characteristics is often required, but this in turn may
not be robust to uncertainty. Moreover, stability analysis is predicated on the
availability of large inputs which may not be possible.

STUDENT PROBLEM
Investigate the impact of input constraints (rate and absolute) on some
open-loop unstable examples of your choice.
An m-file and Simulink pair (openloopunstable.m, openloopunstablesim.slx) are provided as a template to enable you to do this more
quickly. Enter your parameters into the m-file, which calls the Simulink
file to provide a simulation. It is assumed that you can create a suitable classical design for the unconstrained case.

1.5 The potential value of prediction


The previous section has highlighted a number of systems where classical control
can struggle to give good solutions; that is, it will not achieve closed-loop behaviour
close to the performance that is possible. This is unsurprising as classical control has

a limited structure and limited parameters and thus may not have the flexibility to
cater for challenging dynamics where a more involved decision making process is
required. Nevertheless, for many of these scenarios, a human operator (or controller)
is able to maintain high quality performance. Consequently it is interesting to ask
what is different about the control decision making humans deploy?

1.5.1 Why is predictive control logical?


It is recognised that as humans we are very good at controlling the world around us
and thus it is logical to analyse human behaviours and ask how we manage to be
so effective – what underpins our decision making and makes this decision making
more or less effective?
In each example next, humans use anticipation, that is prediction, to help deter-
mine effective control strategies.
EXAMPLE 1: Driving a car
1. Drivers look ahead and anticipate future targets or demands.
2. These include changes in the road, pedestrians, other vehicles, changes in the speed limit, etc.
EXAMPLE 2: Filling a vessel
1. We observe the change in depth and anticipate the future changes.
2. We modify the input flow to ensure the future depth does not exceed the
target.
EXAMPLE 3: Racquet sports
1. Players plan several shots ahead in order to move their opponent into a weaker position, or to prevent themselves being put in such a position.
2. They predict the impact of different shot choices, and select the ones which lead to the most desirable outcome.
