Constrained Optimization
Objective Function
Minimize f(x)
Constraints (design requirements)
Subject to gj(x) ≤ 0, j = 1, 2, …, J (inequality constraints)
hk(x) = 0, k = 1, 2, …, K (equality constraints)
Design Variables
xi(L) ≤ xi ≤ xi(U), i = 1, 2, …, N
A point (or solution) is defined as a feasible point (or solution) if all equality and inequality constraints and variable bounds are satisfied at that point. All other points are known as infeasible points. An infeasible point can never be the optimum point.
There are two ways an inequality constraint can be satisfied at a point: the point falls either on the constraint surface (that is, gj(x(t)) = 0) or on the feasible side of the constraint surface (where gj(x(t)) is negative). If the point falls on the constraint surface, the constraint is said to be an active constraint at that point. In the latter case, the constraint is inactive at the point.
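The active/inactive distinction can be sketched in a few lines. The constraint functions and the test point below are illustrative assumptions, not taken from any example in these notes:

```python
# Classify inequality constraints gj(x) <= 0 at a given point as
# active (on the surface), inactive (strictly feasible), or violated.

def classify_constraints(constraints, x, tol=1e-8):
    """Return 'active', 'inactive', or 'violated' for each gj."""
    labels = []
    for g in constraints:
        value = g(x)
        if abs(value) <= tol:
            labels.append("active")      # point lies on the surface gj = 0
        elif value < 0:
            labels.append("inactive")    # strictly feasible side
        else:
            labels.append("violated")    # infeasible point
    return labels

g1 = lambda x: x[0] + x[1] - 2.0   # active at (1, 1)
g2 = lambda x: x[0] - 5.0          # inactive at (1, 1)
print(classify_constraints([g1, g2], (1.0, 1.0)))
```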
The following conditions may arise in constrained minimization.
1. The constraints may have no effect on the
optimum point; that is, the constrained minimum
is the same as the unconstrained minimum
The minimum point X∗ can then be found by making use of the necessary and sufficient conditions of unconstrained optimization.
2. The optimum (unique) solution occurs on a
constraint boundary
In this case the Karush–Kuhn–Tucker (KKT)
necessary conditions indicate that the negative
of the gradient must be expressible as a positive
linear combination of the gradients of the active
constraints.
∇f(x) + Σj=1..J uj ∇gj(x) + Σk=1..K vk ∇hk(x) = 0
3. If the objective function has two or more
unconstrained local minima, the constrained
problem may have multiple minima
In some cases, even if the objective function has
a single unconstrained minimum, the constraints
may introduce multiple local minima
Karush–Kuhn–Tucker (KKT) Conditions
In constrained optimization problems, points satisfying the Kuhn–Tucker conditions are likely candidates for the optimum. Not all Karush–Kuhn–Tucker points (points that satisfy the Kuhn–Tucker conditions) are optimal points.
Using the Lagrange multiplier technique, the inequality and equality constraints can be added to the objective function to form an unconstrained problem.
uj gj(x) = 0, j = 1, 2, …, J;
uj ≥ 0, j = 1, 2, …, J.
The condition uj gj(x) = 0 arises only for inequality constraints.
If the j-th inequality constraint is active at a point x (that is, gj(x) = 0), the product uj gj(x) = 0 automatically. If an inequality constraint is inactive at a point x (that is, gj(x) < 0), the Lagrange multiplier uj is equal to zero, meaning that in the neighbourhood of the point x the constraint has no effect on the optimum point. The final inequality condition requires that, for active constraints, the corresponding Lagrange multiplier must be non-negative (uj ≥ 0).
A point x(t) and two vectors u(t) and v(t) that satisfy all the above conditions are called Kuhn–Tucker points (also known as K-T points). There are a total of (N + 3J + K) Kuhn–Tucker conditions if the variable bounds are also treated as inequality constraints.
Thus, each variable bound xi(L) ≤ xi ≤ xi(U) can be written as two inequality constraints:
xi − xi(L) ≥ 0,
xi(U) − xi ≥ 0
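The KKT conditions above (stationarity, complementary slackness, and non-negative multipliers) can be checked numerically at a candidate point. The problem below is an illustrative assumption: minimize x1² + x2² subject to 1 − x1 − x2 ≤ 0, whose KKT point is (0.5, 0.5) with u = 1.

```python
# Numerical KKT check: verify grad f + sum uj * grad gj = 0 together
# with uj >= 0 and uj * gj(x) = 0, using central-difference gradients.

def grad(f, x, h=1e-6):
    """Central-difference gradient of f at x."""
    g = []
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def is_kkt_point(f, gs, us, x, tol=1e-4):
    gf = grad(f, x)
    ggs = [grad(g, x) for g in gs]
    for u, g in zip(us, gs):
        if u < -tol:                 # multipliers must be non-negative
            return False
        if abs(u * g(x)) > tol:      # complementary slackness uj*gj = 0
            return False
    # stationarity in each coordinate
    return all(abs(gf[i] + sum(u * gg[i] for u, gg in zip(us, ggs))) < tol
               for i in range(len(x)))

f = lambda x: x[0]**2 + x[1]**2
g1 = lambda x: 1.0 - x[0] - x[1]          # g1(x) <= 0
print(is_kkt_point(f, [g1], [1.0], [0.5, 0.5]))   # True
```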
Find if any of the given points is a KKT point.
Minimize
Since the corners of the box lie on the surface of the sphere of unit radius, x1, x2 and x3 have to satisfy the constraint
x1² + x2² + x3² = 1
Example
This problem has three design variables and one equality
constraint. Hence the equality constraint can be used to
eliminate any one of the design variables from the objective
function. If we choose to eliminate x3:
x3 = (1 − x1² − x2²)^(1/2)
Complex Search Method
The complex search method is similar to the simplex
method of unconstrained search except that the constraints
are handled.
This method was developed by M. J. Box in 1965. The
algorithm begins with a number of feasible points created at
random.
If a point is found to be infeasible, a new point is created
using the previously generated feasible points.
Usually, the infeasible point is pushed towards the centroid
of the previously found feasible points.
Once a set of feasible points is found, the worst point is reflected about the centroid of the rest of the points to find a new point.
Depending on the feasibility and function value of the new
point, the point is further modified or accepted.
If the new point falls outside the variable boundaries, the
point is modified to fall on the violated boundary.
If the new point is infeasible, the point is retracted towards
the feasible points. The worst point in the simplex is
replaced by this new feasible point and the algorithm
continues for the next iteration.
Algorithm of Complex Method
Step 1 Assume a bound in x (x(L), x(U)), a reflection parameter α and termination
parameters ϵ, δ.
Step 2 Generate an initial set of P (usually 2N) feasible points. For each point:
(a) Sample N times to determine the point x(p) in the given bound.
(b) If x(p) is infeasible, calculate the centroid x̄ of the current set of points and reset
x(p) = x(p) + 1/2 (x̄ − x(p)) until x(p) is feasible;
Else if x(p) is feasible, continue with (a) until P points are created.
(c) Evaluate f(x(p)) for p = 0, 1, 2, …, (P − 1).
Step 3 Carry out the reflection step:
(a) Select xR such that f(xR) = max f(x(p)) = Fmax.
(b) Calculate the centroid x̄ (of the points excluding xR) and the new point
xm = x̄ + α(x̄ − xR).
(c) If xm is feasible and f(xm) ≥ Fmax, retract half the distance to the centroid x̄.
Continue until f(xm) < Fmax;
Else if xm is feasible and f(xm) < Fmax, go to Step 5.
Else if xm is infeasible, go to Step 4.
Step 4 Check for feasibility of the solution
(a) For all i, reset violated variable bounds:
If xim < xi(L), set xim = xi(L).
If xim > xi(U), set xim = xi(U).
(b) If the resulting xm is infeasible, retract half the distance to the centroid; continue until xm is feasible. Go to Step 3(c).
Step 5 Replace xR by xm. Check for termination:
(a) Calculate f̄ = (1/P) Σp f(x(p)) and x̄ = (1/P) Σp x(p).
(b) If Σp (f(x(p)) − f̄)² ≤ ϵ and Σp ‖x(p) − x̄‖² ≤ δ,
Terminate;
Else set k = k + 1 and go to Step 3(a).
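The steps above can be sketched compactly. This is a minimal, simplified version under stated assumptions: only variable bounds plus a feasibility test for the other constraints, a fixed α = 1.3, and a fixed iteration count instead of the full Step 5 termination test; the test problem is also an assumption:

```python
# Minimal sketch of Box's complex search method.
import random

def centroid(pts):
    return [sum(p[i] for p in pts) / len(pts) for i in range(len(pts[0]))]

def complex_search(f, feasible, lo, hi, P=8, alpha=1.3, iters=200, seed=0):
    rng = random.Random(seed)
    N = len(lo)
    # Step 2: build P feasible points; pull infeasible samples halfway
    # toward the centroid of the feasible points found so far.
    pts = []
    while len(pts) < P:
        x = [rng.uniform(lo[i], hi[i]) for i in range(N)]
        while pts and not feasible(x):
            c = centroid(pts)
            x = [x[i] + 0.5 * (c[i] - x[i]) for i in range(N)]
        if feasible(x):
            pts.append(x)
    for _ in range(iters):
        # Step 3: reflect the worst point about the centroid of the rest.
        worst = max(range(P), key=lambda p: f(pts[p]))
        rest = [pts[p] for p in range(P) if p != worst]
        c = centroid(rest)
        xm = [c[i] + alpha * (c[i] - pts[worst][i]) for i in range(N)]
        # Step 4: clip to violated variable bounds, then retract toward
        # the centroid until the point is feasible and improving.
        xm = [min(max(xm[i], lo[i]), hi[i]) for i in range(N)]
        while not feasible(xm) or f(xm) >= f(pts[worst]):
            xm = [0.5 * (xm[i] + c[i]) for i in range(N)]
            if max(abs(xm[i] - c[i]) for i in range(N)) < 1e-12:
                break
        pts[worst] = xm          # Step 5: replace the worst point
    return min(pts, key=f)

# Assumed test problem: minimize (x1-3)^2 + (x2-2)^2 with
# x1 + x2 <= 4 in the box [0, 5]^2 (optimum near (2.5, 1.5)).
feas = lambda x: x[0] + x[1] <= 4.0
best = complex_search(lambda x: (x[0]-3)**2 + (x[1]-2)**2, feas,
                      [0.0, 0.0], [5.0, 5.0])
```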
Solution by constrained variation
Taylor's series expansion of the constraint function about the point (x1*, x2*):
g(x1* + dx1, x2* + dx2) ≈ g(x1*, x2*) + (∂g/∂x1)(x1*, x2*) dx1 + (∂g/∂x2)(x1*, x2*) dx2 = 0
Since g(x1*, x2*) = 0, this reduces to
dg = (∂g/∂x1) dx1 + (∂g/∂x2) dx2 = 0 at (x1*, x2*)
Assuming ∂g/∂x2 ≠ 0, we can write
dx2 = − [(∂g/∂x1)/(∂g/∂x2)](x1*, x2*) dx1
Substituting this into df = (∂f/∂x1) dx1 + (∂f/∂x2) dx2 = 0 gives
df = [∂f/∂x1 − (∂g/∂x1)/(∂g/∂x2) · ∂f/∂x2](x1*, x2*) dx1 = 0
Since dx1 is arbitrary, it follows that
(∂f/∂x1 · ∂g/∂x2 − ∂f/∂x2 · ∂g/∂x1)(x1*, x2*) = 0
This equation represents a necessary condition for (x1*, x2*) to be an extreme point (minimum or maximum).
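The necessary condition can be verified numerically. The example below is an assumption (not from these notes): minimize f = x1² + x2² subject to g = x1 + x2 − 2 = 0, whose constrained minimum is (1, 1):

```python
# Check (df/dx1)(dg/dx2) - (df/dx2)(dg/dx1) = 0 at the constrained
# optimum, using central-difference partial derivatives.

def partial(fun, x, i, h=1e-6):
    xp = list(x); xm = list(x)
    xp[i] += h; xm[i] -= h
    return (fun(xp) - fun(xm)) / (2 * h)

f = lambda x: x[0]**2 + x[1]**2
g = lambda x: x[0] + x[1] - 2.0

x_star = (1.0, 1.0)
cond = (partial(f, x_star, 0) * partial(g, x_star, 1)
        - partial(f, x_star, 1) * partial(g, x_star, 0))
print(abs(cond) < 1e-6)   # True: the condition holds at the optimum
```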
Example
A beam of uniform rectangular cross section is to be
cut from a log having a circular cross section of
diameter 2a. The beam has to be used as a
cantilever beam (the length is fixed) to carry a
concentrated load at the free end. Find the
dimensions of the beam that correspond to the
maximum tensile (bending) stress carrying capacity.
Lagrange multipliers
Problem with two variables and one constraint:
Minimize f (x1,x2)
Subject to g(x1,x2)=0
The derivation of the necessary conditions by the
method of Lagrange multipliers requires that at least one
of the partial derivatives of g(x1,x2) be nonzero at an
extreme point.
L(x1, x2, λ) = f(x1, x2) + λ g(x1, x2)
Example
Example: Find the solution using the Lagrange multiplier method.
Minimize f(x, y) = k x⁻¹ y⁻²
subject to
g(x, y) = x² + y² − a² = 0
Solution
The Lagrange function is
L(x, y, λ) = f(x, y) + λ g(x, y) = k x⁻¹ y⁻² + λ (x² + y² − a²)
The necessary conditions for the minimum of f(x, y):
∂L/∂x = −k x⁻² y⁻² + 2λx = 0
∂L/∂y = −2k x⁻¹ y⁻³ + 2λy = 0
∂L/∂λ = x² + y² − a² = 0
Solution (cont'd)
These yield
λ = k/(2x³y²) = k/(x y⁴),
from which the relation x* = (1/√2) y* can be obtained. This relation along with
∂L/∂λ = x² + y² − a² = 0 gives the optimum solution:
x* = a/√3 and y* = √2 a/√3
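The worked solution can be checked numerically, assuming k = 1 and a = 1 for concreteness: the candidate point should satisfy the constraint exactly and give a smaller f than nearby points on the circle:

```python
# Verify x* = a/sqrt(3), y* = sqrt(2)*a/sqrt(3) for f = k/(x*y^2)
# on the constraint circle x^2 + y^2 = a^2.
import math

k, a = 1.0, 1.0
f = lambda x, y: k / (x * y**2)

x_s = a / math.sqrt(3)
y_s = math.sqrt(2) * a / math.sqrt(3)

# The constraint holds at the candidate point.
print(abs(x_s**2 + y_s**2 - a**2) < 1e-9)

# Parametrize the circle as (a*cos t, a*sin t) and compare f at
# nearby feasible points: the candidate should beat all of them.
t_s = math.atan2(y_s, x_s)
f_star = f(x_s, y_s)
others = [f(a * math.cos(t), a * math.sin(t))
          for t in (t_s - 0.2, t_s - 0.1, t_s + 0.1, t_s + 0.2)]
print(all(f_star < v for v in others))
```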
Solution by Lagrange multipliers
Minimize f(X)
subject to gj(X) = 0, j = 1, 2, …, m
The Lagrange function:
L(x1, x2, …, xn, λ1, λ2, …, λm) = f(X) + λ1 g1(X) + λ2 g2(X) + ⋯ + λm gm(X)
By treating L as a function of the (n + m) unknowns x1, x2, …, xn, λ1, λ2, …, λm, the necessary conditions for the extremum of L, which also correspond to the solution of the original problem, are given by:
∂L/∂xi = ∂f/∂xi + Σj=1..m λj ∂gj/∂xi = 0, i = 1, 2, …, n
∂L/∂λj = gj(X) = 0, j = 1, 2, …, m
The solution:
X* = (x1*, x2*, …, xn*)ᵀ and λ* = (λ1*, λ2*, …, λm*)ᵀ
Example 1
Find the dimensions of a cylindrical tin (with top and bottom) made of sheet metal to maximize its volume such that the total surface area is equal to A0 = 24π.
Solution
With x1 the radius and x2 the height, maximize f(x1, x2) = π x1² x2
subject to
g(x1, x2) = 2π x1² + 2π x1 x2 − A0 = 0, A0 = 24π
The necessary conditions yield
x1 = (1/2) x2,
that is,
x1* = (A0/6π)^(1/2), x2* = (2A0/3π)^(1/2), and λ* = −(A0/24π)^(1/2)
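The tin result can be checked numerically with A0 = 24π, for which the formulas give x1* = 2 and x2* = 4. The grid scan below over constraint-satisfying radii is an illustrative check, not part of the original solution:

```python
# Verify the cylinder optimum: eliminate the height via the
# surface-area constraint and scan radii for a better volume.
import math

A0 = 24 * math.pi
volume = lambda r, h: math.pi * r**2 * h
# Height implied by the area constraint 2*pi*r^2 + 2*pi*r*h = A0.
height = lambda r: (A0 - 2 * math.pi * r**2) / (2 * math.pi * r)

x1_star = math.sqrt(A0 / (6 * math.pi))       # radius, = 2
x2_star = math.sqrt(2 * A0 / (3 * math.pi))   # height, = 4
v_star = volume(x1_star, x2_star)

# Radii with positive height: r < sqrt(A0 / 2*pi) ~ 3.46.
rs = [0.5 + 0.01 * i for i in range(291)]
best = max(volume(r, height(r)) for r in rs)
print(best <= v_star + 1e-6)   # True: no scanned radius beats the optimum
```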
Multivariable Optimization with Inequality Constraints
Minimize f(X)
subject to gj(X) ≤ 0, j = 1, 2, …, m
The inequality constraints are transformed into equality constraints by adding slack variables: Gj(X, Y) = gj(X) + yj² = 0, where Y = (y1, y2, …, ym)ᵀ is the vector of slack variables. The Lagrange function is
L(X, Y, λ) = f(X) + Σj=1..m λj Gj(X, Y),
where λ = (λ1, λ2, …, λm)ᵀ is the vector of Lagrange multipliers.
The stationary points of the Lagrange function can be found by solving the following equations (necessary conditions):
∂L/∂xi (X, Y, λ) = ∂f/∂xi (X) + Σj=1..m λj ∂gj/∂xi (X) = 0, i = 1, 2, …, n
∂L/∂λj (X, Y, λ) = Gj(X, Y) = gj(X) + yj² = 0, j = 1, 2, …, m
∂L/∂yj (X, Y, λ) = 2 λj yj = 0, j = 1, 2, …, m
These are (n + 2m) equations in (n + 2m) unknowns. The solution gives the optimum solution vector X*, the Lagrange multiplier vector λ*, and the slack variable vector Y*.
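The slack-variable system can be solved by cases (λj = 0 or yj = 0) for small problems. The one-variable example below is an assumption used only to illustrate the case analysis: minimize f = (x − 2)² subject to g = x − 1 ≤ 0, with conditions 2(x − 2) + λ = 0, x − 1 + y² = 0, and 2λy = 0:

```python
# Case analysis of the slack-variable necessary conditions for the
# assumed problem min (x-2)^2 s.t. x - 1 <= 0.

# Case 1: lam = 0. Stationarity gives x = 2, but then
# y^2 = 1 - x = -1 < 0, so no real slack exists: rejected.
x1 = 2.0
print(1.0 - x1 < 0)          # True: case 1 is infeasible

# Case 2: y = 0. The constraint is active: x = 1, and stationarity
# gives lam = -2*(x - 2) = 2 >= 0, a valid KKT multiplier.
x2 = 1.0
lam = -2.0 * (x2 - 2.0)
print(x2, lam)               # 1.0 2.0
```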
The equations
∂L/∂λj (X, Y, λ) = Gj(X, Y) = gj(X) + yj² = 0, j = 1, 2, …, m
ensure that the constraints
gj(X) ≤ 0, j = 1, 2, …, m
are satisfied.
Penalty Function Method
Minimize f(x)
subject to gj(x) ≤ 0, j = 1, 2, …, m
An interior penalty (barrier) term has the form
Gj = −log[−gj(x)]
Forms of the function Gj in the case of exterior penalty function formulations are
Gj = max[0, gj(x)]
Gj = {max[0, gj(x)]}²
Find X = {x1} which minimizes f (X) = αx1
subject to g1(X) = β − x1 ≤ 0
3. Test whether the point xk* satisfies all the constraints. If xk* is feasible, it is the desired optimum; terminate the procedure. Otherwise, go to step 4.
4. Choose the next value of the penalty parameter to satisfy the relation rk+1 > rk, set the new value of k as k + 1, and go to step 2. Usually, rk+1 is chosen as a fixed multiple of rk.
Example: Minimize f(x) = 100/x
subject to x − 5 ≤ 0
with initial value x = 8 and r1 = 1.
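An exterior-penalty iteration for this example can be sketched as follows. The growth factor rk+1 = 10 rk is an assumption, and the inner unconstrained minimization uses bisection on the derivative of φ(x, r) = 100/x + r·max(0, x − 5)², which is increasing for x > 5, so the starting point x = 8 matters only as the first iterate:

```python
# Exterior penalty method for min 100/x subject to x - 5 <= 0.
# phi(x, r) = 100/x + r * max(0, x - 5)^2, minimized over x > 5.

def minimize_phi(r, lo=5.0, hi=100.0, iters=100):
    """Bisection on d(phi)/dx = -100/x^2 + 2*r*(x - 5), which is
    increasing on (5, hi): negative left of the minimizer, positive right."""
    dphi = lambda x: -100.0 / x**2 + 2.0 * r * (x - 5.0)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x, r = 8.0, 1.0          # infeasible start, r1 = 1 as in the example
for k in range(8):
    x = minimize_phi(r)  # step 2: unconstrained minimum of phi(x, rk)
    r *= 10.0            # step 4: rk+1 > rk (factor 10 is an assumption)
print(round(x, 3))       # approaches the constrained optimum x* = 5
```

As rk grows, the penalty drives the unconstrained minimizer toward the boundary x = 5 from the infeasible side, which is the characteristic behaviour of exterior penalty methods.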