
Constrained Optimization

Objective Function
    Minimize f(x)
Constraints (design requirements)
    Subject to gj(x) ≤ 0 (inequality constraints)
    hk(x) = 0 (equality constraints)
Design Variables
    xi(L) ≤ xi ≤ xi(U)
 A point (or solution) is defined as a feasible point (or solution) if all equality and inequality constraints and variable bounds are satisfied at that point.
 All other points are known as infeasible points. An infeasible point can never be the optimum point.
 There are two ways an inequality constraint can be satisfied at a point: the point falls either on the constraint surface (that is, gj(x(t)) = 0) or on the feasible side of the constraint surface (where gj(x(t)) is negative).
 If the point falls on the constraint surface, the constraint is said to be an active constraint at that point.
 In the latter case, the constraint is inactive at the point.
The following conditions may arise in constrained minimization:
1. The constraints may have no effect on the
optimum point; that is, the constrained minimum
is the same as the unconstrained minimum
 The minimum point X∗ can be found by
making use of the necessary and sufficient
conditions
2. The optimum (unique) solution occurs on a
constraint boundary
In this case the Karush–Kuhn–Tucker (KKT)
necessary conditions indicate that the negative
of the gradient must be expressible as a positive
linear combination of the gradients of the active
constraints.
    ∇f(x) + Σj=1..J uj∇gj(x) + Σk=1..K vk∇hk(x) = 0
3. If the objective function has two or more
unconstrained local minima, the constrained
problem may have multiple minima
In some cases, even if the objective function has
a single unconstrained minimum, the constraints
may introduce multiple local minima
Karush–Kuhn-Tucker Conditions
(KKT)
In constrained optimization problems, points
satisfying Kuhn-Tucker conditions are likely
candidates for the optimum
Not all Karush-Kuhn-Tucker points (points that satisfy the Kuhn-Tucker conditions) are optimal points.
Using Lagrange multiplier technique, the
inequality and equality constraints can be added to
the objective function to form an unconstrained
problem
    uj gj(x) = 0, j = 1, 2, ..., J;
    uj ≥ 0, j = 1, 2, ..., J.
These conditions arise only for inequality constraints.
If the j-th inequality constraint is active at a point x (that
is, gj(x) = 0), the product ujgj(x) = 0.
 if an inequality constraint is inactive at a point x (that is,
gj(x) < 0), the Lagrange multiplier uj is equal to zero,
meaning thereby that in the neighbourhood of the point x
the constraint has no effect on the optimum point.
 The final inequality condition suggests that in the case of
active constraints, the corresponding Lagrange multiplier
must be positive.
A point x(t) and two vectors u(t) and v(t) that satisfy all the above conditions are called Kuhn-Tucker points (also known as K-T points). There are a total of (N + 3J + K) Kuhn-Tucker conditions.
 The variable bounds are also considered as inequality constraints.
 Thus, each variable bound xi(L) ≤ xi ≤ xi(U) can be written as two inequality constraints:

    xi − xi(L) ≥ 0
    xi(U) − xi ≥ 0
Find if any of the following points is a KKT point.
Minimize
Subject to g1(x) = 26 − (x1 − 5)² − x2² ≥ 0,
g2(x) = 20 − 4x1 − x2 ≥ 0,
x1, x2 ≥ 0.
x(1) = (1, 5), x(2) = (0, 0), x(3) = (3, 2), x(4) = (3.396, 0)
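The check can be sketched numerically. The slide omits the objective function, so the sketch below assumes a hypothetical f(x) = (x1 − 3)² + (x2 − 2)² purely for illustration; `is_kkt` tests feasibility, stationarity ∇f(x) = Σ uj∇gj(x) over the active constraints, and the sign condition uj ≥ 0 (the form the KKT conditions take for constraints written as gj(x) ≥ 0).

```python
# Numerical KKT check for "g(x) >= 0"-type constraints (a sketch).
# Hypothetical objective, since the slide leaves it blank:
def grad_f(x):
    return [2*(x[0] - 3), 2*(x[1] - 2)]   # f = (x1-3)^2 + (x2-2)^2

# Constraints written as g(x) >= 0, paired with their gradients.
gs = [
    (lambda x: 26 - (x[0]-5)**2 - x[1]**2, lambda x: [-2*(x[0]-5), -2*x[1]]),
    (lambda x: 20 - 4*x[0] - x[1],         lambda x: [-4.0, -1.0]),
    (lambda x: x[0],                       lambda x: [1.0, 0.0]),   # x1 >= 0
    (lambda x: x[1],                       lambda x: [0.0, 1.0]),   # x2 >= 0
]

def is_kkt(x, tol=1e-6):
    """KKT for g(x) >= 0 form: feasibility, grad f = sum_j u_j grad g_j
    over the active constraints, and u_j >= 0 (handles up to 2 active)."""
    if any(g(x) < -tol for g, _ in gs):
        return False                       # infeasible point
    active = [dg(x) for g, dg in gs if abs(g(x)) <= tol]
    gf = grad_f(x)
    if not active:                         # interior point: need grad f = 0
        return abs(gf[0]) <= tol and abs(gf[1]) <= tol
    if len(active) == 1:                   # one active constraint: least squares
        a = active[0]
        u = (a[0]*gf[0] + a[1]*gf[1]) / (a[0]**2 + a[1]**2)
        res = [gf[0] - u*a[0], gf[1] - u*a[1]]
        return u >= -tol and max(abs(r) for r in res) <= 1e-4
    a, b = active[0], active[1]            # two active: solve 2x2 by Cramer
    det = a[0]*b[1] - a[1]*b[0]
    if abs(det) < 1e-12:
        return False
    u1 = (gf[0]*b[1] - gf[1]*b[0]) / det
    u2 = (a[0]*gf[1] - a[1]*gf[0]) / det
    return u1 >= -tol and u2 >= -tol

print(is_kkt((3.0, 2.0)))   # interior minimum of the assumed f -> True
print(is_kkt((0.0, 0.0)))   # bounds active but multipliers negative -> False
print(is_kkt((1.0, 5.0)))   # violates g1 -> False
```

Points where more than two constraints are active would need a proper least-squares solve; for this 2-D illustration the two-active case is enough.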
Constrained optimization
 Direct methods
 Indirect methods
Variable Elimination Method

 For a problem with n variables and m equality constraints:
 Solve the m equality constraints and express any set of m variables in terms of the remaining n − m variables.
 Substitute these expressions into the original objective function; the result is a new objective function involving only n − m variables.
 The new objective function is not subjected to any constraint, and hence its optimum can be found by using unconstrained optimization techniques.
Variable Elimination Method
 Simple in theory.
 Not convenient from a practical point of view, as the constraint equations will be nonlinear for most problems.
 Suitable only for simple problems.
Example

Find the dimensions of a box of largest volume that can be inscribed in a sphere of unit radius.

Solution: Let the origin of the Cartesian coordinate system x1, x2, x3 be at the center of the sphere and the sides of the box be 2x1, 2x2, and 2x3. The volume of the box is given by:

    f(x1, x2, x3) = 8x1x2x3

Since the corners of the box lie on the surface of the sphere of unit radius, x1, x2 and x3 have to satisfy the constraint:

    x1² + x2² + x3² = 1
Example
This problem has three design variables and one equality constraint. Hence the equality constraint can be used to eliminate any one of the design variables from the objective function. If we choose to eliminate x3:

    x3 = (1 − x1² − x2²)^(1/2)

Thus, the objective function becomes:

    f(x1, x2) = 8x1x2(1 − x1² − x2²)^(1/2)

which can be maximized as an unconstrained function in two variables. The necessary conditions give x1* = x2* = 1/√3 and hence x3* = 1/√3.
Example
This solution gives the maximum volume of the box as:

    fmax = 8/(3√3)

To find whether the solution found corresponds to a maximum or a minimum, we apply the sufficiency conditions to f(x1, x2) = 8x1x2(1 − x1² − x2²)^(1/2). The second-order partial derivatives of f at (x1*, x2*) are given by:

    ∂²f/∂x1² = −8x1x2/(1 − x1² − x2²)^(1/2) − [8x2/(1 − x1² − x2²)][x1³/(1 − x1² − x2²)^(1/2) + 2x1(1 − x1² − x2²)^(1/2)]
             = −32/√3 at (x1*, x2*)

    ∂²f/∂x2² = −8x1x2/(1 − x1² − x2²)^(1/2) − [8x1/(1 − x1² − x2²)][x2³/(1 − x1² − x2²)^(1/2) + 2x2(1 − x1² − x2²)^(1/2)]
             = −32/√3 at (x1*, x2*)
Example
The mixed second-order partial derivative of f at (x1*, x2*) is given by:

    ∂²f/∂x1∂x2 = 8(1 − x1² − x2²)^(1/2) − 8x2²/(1 − x1² − x2²)^(1/2) − 8x1²/(1 − x1² − x2²)^(1/2) − 8x1²x2²/(1 − x1² − x2²)^(3/2)
               = −16/√3 at (x1*, x2*)
Since ∂²f/∂x1² < 0 and (∂²f/∂x1²)(∂²f/∂x2²) − (∂²f/∂x1∂x2)² = (32/√3)² − (16/√3)² > 0, the Hessian matrix of f is negative definite at (x1*, x2*). Hence the point (x1*, x2*) corresponds to the maximum of f.
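The variable-elimination idea in this example can be sketched in code: with x3 eliminated, f(x1, x2) = 8x1x2(1 − x1² − x2²)^(1/2) is maximized with no constraints. A crude grid-refinement search stands in here for a proper unconstrained optimizer.

```python
# Maximize the reduced objective f(x1, x2) = 8*x1*x2*sqrt(1 - x1^2 - x2^2)
# (x3 eliminated via the sphere constraint) by repeated grid refinement.
import math

def f(x1, x2):
    s = 1 - x1*x1 - x2*x2
    return 8*x1*x2*math.sqrt(s) if s > 0 else float("-inf")

lo1, hi1, lo2, hi2 = 0.0, 1.0, 0.0, 1.0
best = (0.5, 0.5)
for _ in range(30):
    step1, step2 = (hi1 - lo1)/20, (hi2 - lo2)/20
    cands = [(lo1 + i*step1, lo2 + j*step2)
             for i in range(21) for j in range(21)]
    best = max(cands, key=lambda p: f(*p))
    # shrink the search window around the current best point
    lo1, hi1 = max(0.0, best[0] - 2*step1), min(1.0, best[0] + 2*step1)
    lo2, hi2 = max(0.0, best[1] - 2*step2), min(1.0, best[1] + 2*step2)

x1s, x2s = best
print(round(x1s, 4), round(x2s, 4))   # both close to 1/sqrt(3) ≈ 0.5774
print(round(f(x1s, x2s), 4))          # close to 8/(3*sqrt(3)) ≈ 1.5396
```

The search recovers x1* = x2* = 1/√3 and fmax = 8/(3√3) from the slides.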
Complex Search Method
 The complex search method is similar to the simplex
method of unconstrained search except that the constraints
are handled.
 This method was developed by M. J. Box in 1965. The
algorithm begins with a number of feasible points created at
random.
 If a point is found to be infeasible, a new point is created
using the previously generated feasible points.
 Usually, the infeasible point is pushed towards the centroid
of the previously found feasible points.
 Once a set of feasible points is found, the worst point is reflected about the centroid of the rest of the points to find a new point.
Complex Search Method
 Depending on the feasibility and function value of the new
point, the point is further modified or accepted.
 If the new point falls outside the variable boundaries, the
point is modified to fall on the violated boundary.
 If the new point is infeasible, the point is retracted towards
the feasible points. The worst point in the simplex is
replaced by this new feasible point and the algorithm
continues for the next iteration.
Algorithm of Complex Method
 Step 1 Assume a bound in x (x(L), x(U)), a reflection parameter α and termination
parameters ϵ, δ.
 Step 2 Generate an initial set of P (usually 2N) feasible points. For each point:
(a) Sample N times to determine the point x(p) in the given bound.
(b) If x(p) is infeasible, calculate the centroid x̄ of the current set of feasible points and reset x(p) = x(p) + 1/2(x̄ − x(p)) until x(p) is feasible;
Else if x(p) is feasible, continue with (a) until P points are created.
(c) Evaluate f(x(p)) for p = 0, 1, 2, ..., (P − 1).
 Step 3 Carry out the reflection step:
(a) Select xR such that f(xR) = max f(x(p)) = Fmax.
(b) Calculate the centroid x̄ (of points except xR) and the new point xm = x̄ + α(x̄ − xR).
(c) If xm is feasible and f(xm) ≥ Fmax, retract half the distance to the centroid x̄; continue until f(xm) < Fmax.
Else if xm is feasible and f(xm) < Fmax, go to Step 5.
Else if xm is infeasible, go to Step 4.
Algorithm of Complex Method
 Step 4 Check for feasibility of the solution:
(a) For all i, reset violated variable bounds:
    If xim < xi(L), set xim = xi(L).
    If xim > xi(U), set xim = xi(U).
(b) If the resulting xm is infeasible, retract half the distance to the centroid; continue until xm is feasible. Go to Step 3(c).
 Step 5 Replace xR by xm. Check for termination:
(a) Calculate f̄ = (1/P) Σp f(x(p)) and x̄ = (1/P) Σp x(p).
(b) If Σp (f(x(p)) − f̄)² ≤ ϵ and Σp ∥x(p) − x̄∥ ≤ δ, Terminate;
Else set k = k + 1 and go to Step 3(a).
Minimize
Subject to
and

Take the initial points as:
X(1) = (0, 0), X(2) = (1, 1), X(3) = (2, 3), X(4) = (1, 2)
α = 1.3
Minimize
Subject to g1(x) = 26 − (x1 − 5)² − x2² ≥ 0,
g2(x) = 20 − 4x1 − x2 ≥ 0,
0 ≤ x1, x2 ≤ 5.
r(1) = (0.1, 0.15), r(2) = (0.4, 0.7), r(3) = (0.9, 0.8), r(4) = (0.6, 0.1)
α = 1.3
Complex Search Method
 This method does not require the derivatives of f(X) and gj(X) to find the minimum point, and hence it is computationally very simple.
 The method rapidly becomes inefficient as the number of variables increases.
 It cannot be used to solve problems having equality constraints.
 This method requires an initial point X1 that is feasible.
Linearized Search Techniques
 In the linearized search techniques, the objective function as
well as the constraints are linearized at a specified point.
 A linear programming (LP) method is used to solve the
problem and to find a new point.
 The objective function and constraints are further linearized
at the new point.
 This process continues until the algorithm converges close
to the desired optimum point.
 A nonlinear function f(x) can be linearized at a point x(0) by retaining the first-order term in the Taylor series expansion of the function at that point:

    f(x) ≈ f(x(0)) + ∇f(x(0))(x − x(0))
Frank-Wolfe Method
 In the Frank-Wolfe method, all constraints and the objective
function are first linearized at an initial point to form an LP
problem.
 The simplex method of LP technique is used to find a new point
by solving the LP problem.
 Thereafter, a unidirectional search is performed from the previous
point to the new point.
 At the best point found using the unidirectional search, another
LP problem is formed by linearizing the constraints and the
objective function.
 The new LP problem is solved to find another new point.
 This procedure is repeated until a termination criterion is
satisfied.
 Since LP problems are comparatively easier to solve, many NLP
problems are solved by using this technique.
Algorithm Frank-Wolfe Method
Algorithm
Step 1 Assume an initial point x(0), two convergence
parameters ϵ and δ. Set an iteration counter t = 0.
Step 2 Calculate ∇f(x(t)). If ∥∇f(x(t))∥ ≤ ϵ, Terminate;
Else go to Step 3.
Step 3 Solve the following LP problem:
    Minimize f(x(t)) + ∇f(x(t))(x − x(t))
    subject to
    gj(x(t)) + ∇gj(x(t))(x − x(t)) ≥ 0, j = 1, 2, ..., J;
    hk(x(t)) + ∇hk(x(t))(x − x(t)) = 0, k = 1, 2, ..., K;
    xi(L) ≤ xi ≤ xi(U), i = 1, 2, ..., N.
Let y(t) be the optimal solution to the above LP problem.
Algorithm Frank-Wolfe Method
(contd)
Step 4 Find α(t) that minimizes f(x(t) + α(y(t) − x(t))) in the
range α ∈ (0, 1).
Step 5 Calculate x(t+1) = x(t) + α(t)(y(t) − x(t)).
Step 6 If ∥x(t+1) − x(t)∥ < δ∥x(t)∥ and ∥f(x(t+1)) − f(x(t))∥ < ϵ∥f(x(t))∥,
Terminate;
Else t = t + 1 and go to Step 2.
Minimize
Subject to

    x1 + x2 − 5 ≤ 0
    x1 + 2x2 − 6 ≤ 0
    x1 ≥ 0, x2 ≥ 0

ε1 = 0.001, initial point X = (2, 1)
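A minimal Frank-Wolfe sketch for this example, assuming a hypothetical quadratic objective f(x) = (x1 − 2)² + (x2 − 3)² since the slide omits it. The constraints are already linear, so each Step 3 LP simply minimizes ∇f(x(t))·y over the polytope; for this small 2-D polytope the LP is solved by enumerating its vertices, and Step 4's line search has a closed form for a quadratic.

```python
# Frank-Wolfe on the polytope {x >= 0, x1 + x2 <= 5, x1 + 2*x2 <= 6}
# with an assumed objective f(x) = (x1-2)^2 + (x2-3)^2.
def grad(x):
    return (2*(x[0] - 2), 2*(x[1] - 3))

# vertices of the feasible polytope (a linear objective attains its
# minimum at one of them, so the LP is solved by enumeration)
V = [(0.0, 0.0), (5.0, 0.0), (0.0, 3.0), (4.0, 1.0)]

x = (2.0, 1.0)                            # given initial point
for _ in range(20000):
    g = grad(x)
    y = min(V, key=lambda v: g[0]*v[0] + g[1]*v[1])   # LP step
    d = (y[0] - x[0], y[1] - x[1])
    denom = 2.0*(d[0]**2 + d[1]**2)       # exact line search for this quadratic
    if denom == 0.0:
        break                             # x is already a vertex minimizer
    a = max(0.0, min(1.0, -(g[0]*d[0] + g[1]*d[1]) / denom))
    x = (x[0] + a*d[0], x[1] + a*d[1])

print(round(x[0], 2), round(x[1], 2))     # approaches the optimum near (1.6, 2.2)
```

Every iterate is a convex combination of feasible points, so feasibility is maintained automatically; convergence is sublinear, which is why many iterations are used here.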
Indirect Methods
Basic idea: convert to one or more unconstrained
optimization problems
 Method of Lagrange multipliers
 Penalty function methods
• Append a penalty for violating constraints (exterior
penalty methods)
• Append a penalty as you approach infeasibility (interior
point methods)
Solution by constrained variation
Minimize f(x1, x2)
subject to g(x1, x2) = 0

 A necessary condition for f to have a minimum at some point (x1*, x2*) is that the total derivative of f(x1, x2) must be zero at (x1*, x2*):

    df = (∂f/∂x1)dx1 + (∂f/∂x2)dx2 = 0

 Since g(x1*, x2*) = 0 at the minimum point, any variations dx1 and dx2 taken about the point (x1*, x2*) are called admissible variations provided that the new point lies on the constraint:

    g(x1* + dx1, x2* + dx2) = 0
Solution by constrained variation
 Taylor series expansion of the function about the point (x1*, x2*):

    g(x1* + dx1, x2* + dx2) ≅ g(x1*, x2*) + (∂g/∂x1)(x1*, x2*)dx1 + (∂g/∂x2)(x1*, x2*)dx2 = 0

 Since g(x1*, x2*) = 0:

    dg = (∂g/∂x1)dx1 + (∂g/∂x2)dx2 = 0 at (x1*, x2*)

 Assuming ∂g/∂x2 ≠ 0:

    dx2 = −[(∂g/∂x1)/(∂g/∂x2)](x1*, x2*)dx1

 Substituting the above equation into df = (∂f/∂x1)dx1 + (∂f/∂x2)dx2 = 0:

    df = [∂f/∂x1 − (∂g/∂x1)/(∂g/∂x2) · ∂f/∂x2](x1*, x2*)dx1 = 0
Solution by constrained variation

    df = [∂f/∂x1 − (∂g/∂x1)/(∂g/∂x2) · ∂f/∂x2](x1*, x2*)dx1 = 0

 The expression in brackets is called the constrained variation of f.
 Since dx1 can be chosen arbitrarily:

    [(∂f/∂x1)(∂g/∂x2) − (∂f/∂x2)(∂g/∂x1)](x1*, x2*) = 0

 This equation represents a necessary condition in order to have (x1*, x2*) as an extreme point (minimum or maximum).
Example
A beam of uniform rectangular cross section is to be
cut from a log having a circular cross section of
diameter 2a. The beam has to be used as a
cantilever beam (the length is fixed) to carry a
concentrated load at the free end. Find the
dimensions of the beam that correspond to the
maximum tensile (bending) stress carrying capacity.
Lagrange multipliers
Problem with two variables and one constraint:

    Minimize f(x1, x2)
    Subject to g(x1, x2) = 0

For this problem, the necessary condition was found to be:

    [∂f/∂x1 − (∂f/∂x2)/(∂g/∂x2) · ∂g/∂x1](x1*, x2*) = 0

By defining a quantity λ, called the Lagrange multiplier, as:

    λ = −[(∂f/∂x2)/(∂g/∂x2)](x1*, x2*)
Lagrange multipliers
 The derivation of the necessary conditions by the method of Lagrange multipliers requires that at least one of the partial derivatives of g(x1, x2) be nonzero at an extreme point.
 The necessary conditions are more commonly generated by constructing a function L, known as the Lagrange function, as

    L(x1, x2, λ) = f(x1, x2) + λg(x1, x2)
Solution by Lagrange multipliers
Problem with two variables and one constraint:

• By treating L as a function of the three variables x1, x2 and λ, the necessary conditions for its extremum are given by:

    ∂L/∂x1 (x1, x2, λ) = ∂f/∂x1 (x1, x2) + λ ∂g/∂x1 (x1, x2) = 0
    ∂L/∂x2 (x1, x2, λ) = ∂f/∂x2 (x1, x2) + λ ∂g/∂x2 (x1, x2) = 0
    ∂L/∂λ (x1, x2, λ) = g(x1, x2) = 0
Example
Find the solution using the Lagrange multiplier method.

    Minimize f(x, y) = kx⁻¹y⁻²
    subject to g(x, y) = x² + y² − a² = 0

Solution
The Lagrange function is:

    L(x, y, λ) = f(x, y) + λg(x, y) = kx⁻¹y⁻² + λ(x² + y² − a²)

The necessary conditions for the minimum of f(x, y) give:

    ∂L/∂x = −kx⁻²y⁻² + 2xλ = 0
    ∂L/∂y = −2kx⁻¹y⁻³ + 2yλ = 0
    ∂L/∂λ = x² + y² − a² = 0
Example
Solution (cont'd)
These yield:

    2λ = k/(x³y²) = 2k/(xy⁴)

from which the relation x* = (1/√2)y* can be obtained. This relation, along with ∂L/∂λ = x² + y² − a² = 0, gives the optimum solution as:

    x* = a/√3 and y* = √2·a/√3
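The optimum can be verified numerically by substituting the solution back into the necessary conditions (k and a below are arbitrary positive values chosen for the check):

```python
# Verify x* = a/sqrt(3), y* = sqrt(2)*a/sqrt(3) against the three
# necessary conditions of the Lagrange function L = k/(x*y^2) + lam*(x^2+y^2-a^2).
import math

k, a = 2.0, 3.0                     # assumed positive constants for the check
x = a / math.sqrt(3)
y = math.sqrt(2) * a / math.sqrt(3)
lam = k / (2 * x**3 * y**2)         # from dL/dx = -k/(x^2*y^2) + 2*lam*x = 0

dLdx = -k / (x**2 * y**2) + 2 * lam * x
dLdy = -2 * k / (x * y**3) + 2 * lam * y
g = x**2 + y**2 - a**2

print(abs(dLdx) < 1e-9, abs(dLdy) < 1e-9, abs(g) < 1e-9)   # True True True
```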
Solution by Lagrange multipliers
Necessary conditions for a general problem:

    Minimize f(X)
    subject to gj(X) = 0, j = 1, 2, ..., m

The Lagrange function, L, in this case is defined by introducing one Lagrange multiplier λj for each constraint gj(X) as:

    L(x1, x2, ..., xn, λ1, λ2, ..., λm) = f(X) + λ1g1(X) + λ2g2(X) + ... + λmgm(X)
Solution by Lagrange multipliers
By treating L as a function of the n + m unknowns x1, x2, ..., xn, λ1, λ2, ..., λm, the necessary conditions for the extremum of L, which also corresponds to the solution of the original problem, are given by:

    ∂L/∂xi = ∂f/∂xi + Σj=1..m λj ∂gj/∂xi = 0, i = 1, 2, ..., n
    ∂L/∂λj = gj(X) = 0, j = 1, 2, ..., m

The above equations represent n + m equations in terms of the n + m unknowns xi and λj.
Solution by Lagrange multipliers
The solution:

    X* = (x1*, x2*, ..., xn*)ᵀ and λ* = (λ1*, λ2*, ..., λm*)ᵀ

The vector X* corresponds to the relative constrained minimum of f(X) (sufficient conditions are to be verified).
Example 1
Find the dimensions of a cylindrical tin (with top and bottom) made up of sheet metal to maximize its volume such that the total surface area is equal to A0 = 24π.

Solution
If x1 and x2 denote the radius of the base and the length of the tin, respectively, the problem can be stated as:

    Maximize f(x1, x2) = πx1²x2
    subject to 2πx1² + 2πx1x2 = A0 = 24π
Example 1
Solution

    Maximize f(x1, x2) = πx1²x2
    subject to 2πx1² + 2πx1x2 = A0

The Lagrange function is:

    L(x1, x2, λ) = πx1²x2 + λ(2πx1² + 2πx1x2 − A0)

and the necessary conditions for the maximum of f give:

    ∂L/∂x1 = 2πx1x2 + 4πλx1 + 2πλx2 = 0
    ∂L/∂x2 = πx1² + 2πλx1 = 0
    ∂L/∂λ = 2πx1² + 2πx1x2 − A0 = 0
Example 1
Solution
The first two conditions give:

    λ = −x1x2/(2x1 + x2) = −x1/2

that is,

    x1 = x2/2

The above equations give the desired solution as:

    x1* = (A0/6π)^(1/2), x2* = (2A0/3π)^(1/2), and λ* = −(A0/24π)^(1/2)
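A quick numerical check of this solution, using A0 = 24π so that the formulas give x1* = 2 and x2* = 4:

```python
# Check the tin solution: substitute A0 = 24*pi into the closed-form optimum,
# verify the surface-area constraint, and confirm the stationary point is a
# maximum after eliminating x2 through the constraint.
import math

A0 = 24 * math.pi
x1 = math.sqrt(A0 / (6 * math.pi))        # -> 2.0
x2 = math.sqrt(2 * A0 / (3 * math.pi))    # -> 4.0
area = 2*math.pi*x1**2 + 2*math.pi*x1*x2  # should equal A0
vol = math.pi * x1**2 * x2                # -> 16*pi

# Eliminating x2 via the constraint gives V(x1) = pi*(12*x1 - x1^3),
# whose stationary point V'(x1) = pi*(12 - 3*x1^2) = 0 is x1 = 2.
V = lambda r: math.pi * (12*r - r**3)
print(round(x1, 6), round(x2, 6))         # 2.0 4.0
print(V(2.0) > V(1.9) and V(2.0) > V(2.1))  # True: x1 = 2 is a maximum
```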
Multivariable optimization with inequality constraints

    Minimize f(X)
    subject to gj(X) ≤ 0, j = 1, 2, ..., m

The inequality constraints can be transformed to equality constraints by adding nonnegative slack variables yj², as

    gj(X) + yj² = 0, j = 1, 2, ..., m

where the values of the slack variables are yet unknown.
Multivariable optimization with inequality constraints
Minimize f(X) subject to

    Gj(X, Y) = gj(X) + yj² = 0, j = 1, 2, ..., m

where Y = (y1, y2, ..., ym)ᵀ is the vector of slack variables.

This problem can be solved by the method of Lagrange multipliers. For this, the Lagrange function L is constructed as:

    L(X, Y, λ) = f(X) + Σj=1..m λj Gj(X, Y)

where λ = (λ1, λ2, ..., λm)ᵀ is the vector of Lagrange multipliers.
Multivariable optimization with inequality constraints
The stationary points of the Lagrange function can be found by solving the following equations (necessary conditions):

    ∂L/∂xi (X, Y, λ) = ∂f/∂xi (X) + Σj=1..m λj ∂gj/∂xi (X) = 0, i = 1, 2, ..., n
    ∂L/∂λj (X, Y, λ) = Gj(X, Y) = gj(X) + yj² = 0, j = 1, 2, ..., m
    ∂L/∂yj (X, Y, λ) = 2λjyj = 0, j = 1, 2, ..., m

These are (n + 2m) equations in (n + 2m) unknowns. The solution gives the optimum solution vector X*, the Lagrange multiplier vector λ*, and the slack variable vector Y*.
Multivariable optimization with inequality constraints
The equations

    ∂L/∂λj (X, Y, λ) = Gj(X, Y) = gj(X) + yj² = 0, j = 1, 2, ..., m

ensure that the constraints

    gj(X) ≤ 0, j = 1, 2, ..., m

are satisfied, while the equations

    ∂L/∂yj (X, Y, λ) = 2λjyj = 0, j = 1, 2, ..., m

imply that either λj = 0 or yj = 0.
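The "either λj = 0 or yj = 0" alternative can be illustrated on a one-variable sketch: minimize f(x) = (x − 2)² subject to g(x) = x − b ≤ 0, i.e. g(x) + y² = 0. For b = 1 the constraint is active at the optimum (y = 0, λ > 0); for b = 3 it is inactive (λ = 0, y² > 0).

```python
# One-variable illustration of complementary slackness.
# L = (x - 2)^2 + lam*((x - b) + y^2); dL/dx = 2*(x - 2) + lam = 0.
def solve(b):
    # Case 1: assume y = 0 (constraint active): x = b, lam = -2*(b - 2).
    lam_active = -2*(b - 2)
    if lam_active >= 0:               # valid multiplier sign for a minimum
        return b, lam_active, 0.0     # (x*, lambda*, y^2)
    # Case 2: assume lam = 0 (constraint inactive): x = 2, y^2 = -(2 - b).
    return 2.0, 0.0, b - 2

for b in (1.0, 3.0):
    x, lam, y2 = solve(b)
    print(b, x, lam, y2)              # the product lam*y2 is zero in both cases
```

In both cases λ·y² = 0, matching the condition ∂L/∂yj = 2λjyj = 0 above.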
Penalty Function Method

    Minimize f(x)
    subject to gj(x) ≤ 0, j = 1, 2, ..., m

 A penalty is added to the objective function for violating the constraints:

    φk = φ(x, rk) = f(x) + rk Σj=1..m Gj[gj(x)]

 where Gj is some function of the constraint gj, and rk is a positive constant known as the penalty parameter.
 The penalty function formulations for inequality constrained problems can be divided into two categories: interior and exterior methods. In the interior formulations, some popularly used forms of Gj are given by

    Gj = −1/gj(x)
    Gj = −log[−gj(x)]
Penalty Function Method
 Forms of the function Gj in the case of exterior penalty function formulations are

    Gj = max[0, gj(x)]
    Gj = {max[0, gj(x)]}²

Example: Find X = {x1} which minimizes f(X) = αx1 subject to g1(X) = β − x1 ≤ 0, using (a) the interior penalty and (b) the exterior penalty formulations.
Interior Penalty Function Method
 The penalty function is chosen such that its value will be small at points away from the constraint boundaries and will tend to infinity as the constraint boundaries are approached.
 Hence the value of the φ function also "blows up" as the constraint boundaries are approached.
 Thus, once the unconstrained minimization of φ(x, rk) is started from any feasible point x1, the subsequent points generated will always lie within the feasible domain, since the constraint boundaries act as barriers during the minimization process.
 This is why the interior penalty function methods are also known as barrier methods. The φ function defined originally by Carroll is

    φk = φ(x, rk) = f(x) − rk Σj=1..m 1/gj(x)
Interior Penalty Function Method
 The value of the function φ will always be greater than f, since gj(x) is negative for all feasible points x.
 If any constraint gj(x) is satisfied critically (with equality sign), the value of φ tends to infinity.
 It is to be noted that the penalty term is not defined if x is infeasible.
 This introduces a serious shortcoming, as it does not allow any constraint to be violated.
 It requires a feasible starting point for the search toward the optimum point.

    φk = φ(x, rk) = f(x) − rk Σj=1..m 1/gj(x)
Algorithm for Interior Penalty Function
Step1. Start with an initial feasible point x1 satisfying all
the constraints with strict inequality sign, that is, gj (x1) <
0 for j = 1, 2, . . . ,m, and an initial value of r1 >0. Set k =
1.
Step2. Minimize φ(x, rk) by using any of the unconstrained
minimization methods and obtain the solution xk∗ .
Step 3. Test whether xk∗ is the optimum solution of the
original problem. If xk∗ is found to be optimum, terminate
the process. Otherwise, go to the next step.
Step 4. Find the value of the next penalty parameter, rk+1, as
rk+1 = crk where c < 1.
Step 5. Set the new value of k = k + 1, take the new starting
point as x1 = xk∗ , and go to step 2.
 In practice, a value of r1 that gives the value of φ(X1, r1) approximately equal to 1.1 to 2.0 times the value of f(X1) has been found to be quite satisfactory in achieving quick convergence of the process.
 Once the initial value of rk is chosen, the subsequent values rk+1 have to be chosen such that rk+1 < rk. For convenience, the values of rk are chosen according to the relation rk+1 = c·rk, where c < 1. The value of c can be taken as 0.1, 0.2, or 0.5.
Interior Penalty Method
Minimize f(x) = 100/x
Subject to x − 5 ≤ 0
Initial value x = 2, r1 = 100
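A sketch of the interior penalty iteration for this example. A simple 1-D ternary search stands in for the unconstrained minimizer of step 2 (valid here because φ(·, r) is convex on 0 < x < 5), so the slide's feasible starting point is not needed by this particular minimizer; the barrier term keeps every iterate feasible (x < 5).

```python
# Interior (barrier) iteration for: minimize f(x) = 100/x s.t. x - 5 <= 0.
# Carroll's form with g(x) = x - 5: phi = f - r/g = 100/x + r/(5 - x).
def phi(x, r):
    return 100.0/x - r/(x - 5.0)

def ternary_min(func, lo, hi, iters=200):
    # func is convex on (lo, hi), so ternary search finds its minimum
    for _ in range(iters):
        m1, m2 = lo + (hi - lo)/3, hi - (hi - lo)/3
        if func(m1) < func(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi)/2

r = 100.0                                # initial penalty parameter (slide)
for _ in range(12):
    x = ternary_min(lambda t: phi(t, r), 1e-3, 5.0 - 1e-9)
    r *= 0.1                             # r_{k+1} = c*r_k with c = 0.1

print(round(x, 3))                       # -> 5.0 (constrained minimum, f = 20)
```

Each subproblem minimum sits at x = 5/(1 + √r/10), strictly inside the feasible region, and drifts toward the boundary x = 5 as r shrinks.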
Exterior Penalty Function Method
 In the exterior penalty function method, the φ function is generally taken as

    φk = φ(x, rk) = f(x) + rk Σj=1..m ⟨gj(x)⟩^q

 where rk is a positive penalty parameter, the exponent q is a nonnegative constant, and the bracket function ⟨gj(x)⟩ is defined as

    ⟨gj(x)⟩ = max[0, gj(x)]
            = gj(x) if gj(x) > 0 (constraint is violated)
            = 0 if gj(x) ≤ 0 (constraint is satisfied)
Exterior Penalty Function Method
Algorithm
Steps:
1. Start from any design x1 and a suitable value of r1. Set k = 1.
2. Find the vector xk* that minimizes the function

    φk = φ(x, rk) = f(x) + rk Σj=1..m ⟨gj(x)⟩^q

3. Test whether the point xk* satisfies all the constraints. If xk* is feasible, it is the desired optimum; terminate the procedure. Otherwise, go to step 4.
4. Choose the next value of the penalty parameter such that rk+1 > rk, set k = k + 1, and go to step 2. Usually, rk+1 is chosen according to the relation rk+1 = c·rk with c > 1.
Minimize f(x) = 100/x
Subject to x − 5 ≤ 0
Initial value x = 8, r1 = 1
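The same problem with the exterior formulation (q = 2). The 1-D ternary search below does not need the slide's infeasible starting point x = 8, but the successive subproblem minima approach x* = 5 from x > 5, which is the characteristic behavior of exterior penalties.

```python
# Exterior penalty for: minimize f(x) = 100/x s.t. g(x) = x - 5 <= 0.
def phi(x, r):
    return 100.0/x + r*max(0.0, x - 5.0)**2   # bracket operator with q = 2

def ternary_min(func, lo, hi, iters=200):
    # phi(., r) is convex for x > 0, so ternary search finds its minimum
    for _ in range(iters):
        m1, m2 = lo + (hi - lo)/3, hi - (hi - lo)/3
        if func(m1) < func(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi)/2

r = 1.0                                  # r1 from the slide
history = []
for _ in range(8):
    x = ternary_min(lambda t: phi(t, r), 0.1, 10.0)
    history.append(x)
    r *= 10.0                            # r_{k+1} = c*r_k with c = 10

print(round(x, 3))                       # -> 5.0, approached from x > 5
```

Every intermediate solution is slightly infeasible (x > 5): the penalty only pushes the iterates toward the boundary, and feasibility is reached only in the limit r → ∞.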
