Chapter 4 - Constrained Optimization
Introduction
In economic optimization problems, the variables involved are often required to satisfy certain
constraints. In unconstrained optimization problems, no restrictions are placed on the values of
the choice variables. In reality, however, optimization of an economic function must respect
resource requirements or availability; this emanates from the problem of scarcity of resources.
For example, maximization of production is subject to the availability of inputs, and
minimization of cost must still satisfy a given level of output. Another common constraint in
economics is the non-negativity restriction: although negative values are sometimes admissible,
most economic functions are meaningful only in the first quadrant. These constraints must
therefore be incorporated into the optimization problem.
Constrained optimization deals with optimizing an objective function (the function to be
optimized) subject to constraints (restrictions). When both the objective and the constraint
functions are linear, the problem is solved with the linear programming model. When the
functions are nonlinear, we use the concept of derivatives. This chapter focuses on the
optimization of nonlinear constrained functions.
Consider, for instance, a consumer maximizing utility $u(x_1, x_2)$ subject to a budget
constraint $p_1 x_1 + p_2 x_2 = m$. One approach is the substitution method: use the constraint
to express $x_2$ as a function of $x_1$ (so that $\partial x_2 / \partial x_1 = -p_1/p_2$) and
differentiate:
$$\frac{du}{dx_1} = \frac{\partial u}{\partial x_1} + \frac{\partial u}{\partial x_2}\cdot\frac{\partial x_2}{\partial x_1} = 0$$
$$= Mu_1 + Mu_2\left(\frac{-p_1}{p_2}\right) = 0$$
$$\Rightarrow Mu_1 = Mu_2\cdot\frac{p_1}{p_2}$$
$$\Rightarrow \frac{Mu_1}{Mu_2} = \frac{p_1}{p_2}$$
For example, if $u = x_1 x_2$ and the budget constraint gives $x_2 = 30 - \frac{1}{4}x_1$, then
$$\Rightarrow u = x_1\left(30 - \frac{1}{4}x_1\right) = 30x_1 - \frac{1}{4}x_1^2$$
F.O.C. (First Order Condition):
$$\frac{du}{dx_1} = 30 - \frac{1}{2}x_1 = 0 \quad \Rightarrow \quad x_1 = 60$$
Alternatively, we can use the Lagrange-multiplier method: form the Lagrangian function
$$L = f(x_1, x_2) + \lambda\left(c - g(x_1, x_2)\right)$$
where $L$ denotes the Lagrangian, $f$ the objective function, $g$ the constraint function, and
$\lambda$ the Lagrange multiplier.
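To see the method in action computationally, here is a minimal sketch in Python with SymPy, assuming the hypothetical utility function $u = x_1x_2$ and budget constraint $x_1 + 4x_2 = 120$ (consistent with the substituted form $x_2 = 30 - \frac{1}{4}x_1$ used above):

```python
# A minimal sketch: solving the Lagrangian F.O.C. symbolically with SymPy.
# Assumed problem (hypothetical): max u = x1*x2  s.t.  x1 + 4*x2 = 120.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', positive=True)
f = x1 * x2                  # objective function
g = x1 + 4 * x2              # constraint function
c = 120                      # constant of the constraint

L = f + lam * (c - g)        # Lagrangian: L = f + lam*(c - g)

# F.O.C.: set every first-order partial of L equal to zero and solve.
foc = [sp.diff(L, v) for v in (x1, x2, lam)]
print(sp.solve(foc, [x1, x2, lam], dict=True))
# [{lam: 15, x1: 60, x2: 15}]
```

Note that $x_1^* = 60$ agrees with the substitution method above.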
An interpretation of the Lagrange Multiplier
The Lagrange multiplier, $\lambda$, measures the effect on the objective function of a one-unit
change in the constant of the constraint function (the stock of a resource). To show this, take
the Lagrangian function
$$L = f(x_1, x_2) + \lambda\left(c - g(x_1, x_2)\right)$$
F.O.C.:
$$L_1 = f_1(x_1, x_2) - \lambda g_1(x_1, x_2) = 0$$
$$L_2 = f_2(x_1, x_2) - \lambda g_2(x_1, x_2) = 0$$
$$L_\lambda = c - g(x_1, x_2) = 0$$
We can express the optimal choices $\lambda^*$, $x_1^*$ and $x_2^*$ as implicit functions of the
parameter $c$:
$$x_1^* = x_1^*(c), \qquad x_2^* = x_2^*(c), \qquad \lambda^* = \lambda^*(c)$$
Now, since the optimal value of $L$ depends on $\lambda^*$, $x_1^*$ and $x_2^*$, we may consider
$L^*$ to be a function of $c$ alone. That is,
$$L^* = f\left(x_1^*(c), x_2^*(c)\right) + \lambda^*(c)\left[c - g\left(x_1^*(c), x_2^*(c)\right)\right]$$
Differentiating with respect to $c$,
$$\frac{dL^*}{dc} = \left(f_1 - \lambda^* g_1\right)\frac{dx_1^*}{dc} + \left(f_2 - \lambda^* g_2\right)\frac{dx_2^*}{dc} + \left[c - g\left(x_1^*, x_2^*\right)\right]\frac{d\lambda^*}{dc} + \lambda^*$$
By the first-order conditions, the three bracketed expressions all equal zero, leaving
$$\frac{dL^*}{dc} = \lambda^*$$
Note: Since $\frac{dL^*}{dc} = \lambda^*$, if $\lambda > 0$, then for every one-unit increase
(decrease) in the constant of the constraining function, the optimal value of the objective
function will increase (decrease) by a value approximately equal to $\lambda$. If $\lambda < 0$,
a unit increase (decrease) in the constant of the constraint will lead to a decrease (increase)
in the optimal value of the objective function by approximately $|\lambda|$.
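The result $dL^*/dc = \lambda^*$ can be verified symbolically. The sketch below reuses the assumed problem from the previous listing, but leaves the constraint constant as a parameter $c$:

```python
# Sketch: verifying dL*/dc = lambda* for the assumed problem
# max x1*x2 s.t. x1 + 4*x2 = c, with c kept symbolic.
import sympy as sp

x1, x2, lam, c = sp.symbols('x1 x2 lam c', positive=True)
L = x1 * x2 + lam * (c - x1 - 4 * x2)

sol = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)],
               [x1, x2, lam], dict=True)[0]
# Optimal choices as functions of c: x1* = c/2, x2* = c/8, lam* = c/8.
L_star = (x1 * x2).subs(sol)     # optimal value: c**2/16
print(sp.diff(L_star, c))        # c/8
print(sol[lam])                  # c/8 -- the same, as claimed
```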
S.O.C. (Second-Order Conditions) for a constrained optimization problem
$$\text{Max } f(x, y) \quad \text{subject to} \quad g(x, y) = 0$$
$$L = f(x, y) + \lambda g(x, y)$$
F.O.C.:
$$L_\lambda = g(x, y) = 0$$
$$L_x = f_x + \lambda g_x = 0$$
$$L_y = f_y + \lambda g_y = 0$$
For the S.O.C. we need the second-order partials:
$$L_{\lambda\lambda} = 0, \qquad L_{\lambda x} = g_x, \qquad L_{\lambda y} = g_y$$
$$L_{xx} = f_{xx} + \lambda g_{xx}, \qquad L_{xy} = f_{xy} + \lambda g_{xy} = L_{yx}, \qquad L_{yy} = f_{yy} + \lambda g_{yy}$$
In matrix form:
$$\begin{pmatrix} L_{\lambda\lambda} & L_{\lambda x} & L_{\lambda y} \\ L_{x\lambda} & L_{xx} & L_{xy} \\ L_{y\lambda} & L_{yx} & L_{yy} \end{pmatrix} = \begin{pmatrix} 0 & g_x & g_y \\ g_x & L_{xx} & L_{xy} \\ g_y & L_{yx} & L_{yy} \end{pmatrix}$$
This matrix is called the bordered Hessian, and its determinant is denoted by $|\bar{H}|$. The
bordered Hessian is simply the plain Hessian
$$\begin{vmatrix} L_{xx} & L_{xy} \\ L_{yx} & L_{yy} \end{vmatrix}$$
bordered by the first-order derivatives of the constraint, with zero on the principal diagonal.
Determinant criterion for the sign definiteness of $d^2z$: subject to $dg = 0$, $d^2z$ is
positive definite (a minimum) or negative definite (a maximum) according to
$$|\bar{H}| = \begin{vmatrix} 0 & g_x & g_y \\ g_x & L_{xx} & L_{xy} \\ g_y & L_{yx} & L_{yy} \end{vmatrix} \;\begin{cases} < 0 \;\rightarrow\; \text{Min (positive definite)} \\ > 0 \;\rightarrow\; \text{Max (negative definite)} \end{cases}$$
n-variable case
Given $f(x_1, x_2, \cdots, x_n)$ subject to $g(x_1, x_2, \cdots, x_n) = c$:
$$|\bar{H}| = \begin{vmatrix} 0 & g_1 & g_2 & \cdots & g_n \\ g_1 & L_{11} & L_{12} & \cdots & L_{1n} \\ \vdots & \vdots & \vdots & & \vdots \\ g_n & L_{n1} & L_{n2} & \cdots & L_{nn} \end{vmatrix}$$
Its bordered leading principal minors can be defined as:
$$|\bar{H}_2| = \begin{vmatrix} 0 & g_1 & g_2 \\ g_1 & L_{11} & L_{12} \\ g_2 & L_{21} & L_{22} \end{vmatrix}, \qquad |\bar{H}_3| = \begin{vmatrix} 0 & g_1 & g_2 & g_3 \\ g_1 & L_{11} & L_{12} & L_{13} \\ g_2 & L_{21} & L_{22} & L_{23} \\ g_3 & L_{31} & L_{32} & L_{33} \end{vmatrix}, \text{ etc.}$$
Conditions for maximization and minimization:
F.O.C.: the same for both problems, i.e. $L_\lambda = L_1 = L_2 = \cdots = L_n = 0$.
S.O.C.: for a maximum, the bordered leading principal minors alternate in sign,
$|\bar{H}_2| > 0,\; |\bar{H}_3| < 0,\; |\bar{H}_4| > 0, \cdots, (-1)^n|\bar{H}_n| > 0$;
for a minimum, they are all negative, $|\bar{H}_2|,\; |\bar{H}_3|, \cdots, |\bar{H}_n| < 0$.
For example, given a bordered Hessian with $g_1 = 3$, $g_2 = 4$, $L_{xx} = L_{yy} = 0$ and
$L_{xy} = L_{yx} = 2$:
$$\begin{pmatrix} 0 & g_1 & g_2 \\ g_1 & L_{xx} & L_{xy} \\ g_2 & L_{yx} & L_{yy} \end{pmatrix} \Rightarrow |\bar{H}| = \begin{vmatrix} 0 & 3 & 4 \\ 3 & 0 & 2 \\ 4 & 2 & 0 \end{vmatrix}$$
S.O.C.: $|\bar{H}| = |\bar{H}_2| = 48 > 0 \rightarrow$ negative definite (Max).
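The arithmetic can be checked mechanically; a minimal sketch:

```python
# Sketch: evaluating the bordered Hessian determinant of the example.
import sympy as sp

H_bar = sp.Matrix([[0, 3, 4],
                   [3, 0, 2],
                   [4, 2, 0]])
print(H_bar.det())   # 48 > 0 -> negative definite (Max)
```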
The case of more than one constraint: given $f(x_1, x_2, x_3)$ subject to
$$g^1(x_1, x_2, x_3) = c_1 \quad \text{and} \quad g^2(x_1, x_2, x_3) = c_2,$$
the Lagrangian is
$$L = f(x_1, x_2, x_3) + \lambda_1\left(c_1 - g^1(x_1, x_2, x_3)\right) + \lambda_2\left(c_2 - g^2(x_1, x_2, x_3)\right)$$
F.O.C.:
$$L_1 = f_1 - \lambda_1 g_1^1 - \lambda_2 g_1^2 = 0$$
$$L_2 = f_2 - \lambda_1 g_2^1 - \lambda_2 g_2^2 = 0$$
$$L_3 = f_3 - \lambda_1 g_3^1 - \lambda_2 g_3^2 = 0$$
$$L_{\lambda_1} = c_1 - g^1(x_1, x_2, x_3) = 0$$
$$L_{\lambda_2} = c_2 - g^2(x_1, x_2, x_3) = 0$$
S.O.C.: with two constraints, the Hessian carries a double border:
$$\bar{H} = \begin{vmatrix} 0 & 0 & g_1^1 & g_2^1 & g_3^1 \\ 0 & 0 & g_1^2 & g_2^2 & g_3^2 \\ g_1^1 & g_1^2 & L_{11} & L_{12} & L_{13} \\ g_2^1 & g_2^2 & L_{21} & L_{22} & L_{23} \\ g_3^1 & g_3^2 & L_{31} & L_{32} & L_{33} \end{vmatrix}$$
Here $|\bar{H}_2|$ is the bordered principal minor that contains $L_{22}$ as the last element of
its principal diagonal, and $|\bar{H}_3|$ is the one that contains $L_{33}$ as the last element
of its principal diagonal.
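Because the Lagrangian is linear in $\lambda_1$ and $\lambda_2$, taking the Hessian of $L$ with respect to $(\lambda_1, \lambda_2, x_1, x_2, x_3)$ reproduces this doubly bordered structure, with the $2 \times 2$ block of zeros in the upper-left corner. A sketch, assuming the hypothetical functions $f = x_1x_2x_3$, $g^1 = x_1 + x_2 + x_3$ and $g^2 = x_1 + 2x_2 + 3x_3$ purely for illustration:

```python
# Sketch: building a doubly bordered Hessian with SymPy.
# f, g1 and g2 are hypothetical, chosen only for illustration.
import sympy as sp

x1, x2, x3, l1, l2, c1, c2 = sp.symbols('x1 x2 x3 l1 l2 c1 c2')
f  = x1 * x2 * x3
g1 = x1 + x2 + x3
g2 = x1 + 2*x2 + 3*x3

L = f + l1 * (c1 - g1) + l2 * (c2 - g2)

# Hessian of L w.r.t. (l1, l2, x1, x2, x3): zeros in the top-left 2x2
# block, borders equal to the (negated) constraint gradients. Negating
# both border rows and columns leaves the determinant unchanged.
H_bar = sp.hessian(L, (l1, l2, x1, x2, x3))
print(H_bar)
print(sp.det(H_bar))
```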
A Jacobian Determinant
A Jacobian determinant permits testing for functional dependence among both linear and nonlinear
functions. It is composed of all the first-order partial derivatives of a system of equations,
arranged in ordered sequence.
Example: given
$$y_1 = f^1(x_1, x_2, x_3), \qquad y_2 = f^2(x_1, x_2, x_3), \qquad y_3 = f^3(x_1, x_2, x_3)$$
$$|J| = \begin{vmatrix} \dfrac{\partial y_1}{\partial x_1} & \dfrac{\partial y_1}{\partial x_2} & \dfrac{\partial y_1}{\partial x_3} \\ \dfrac{\partial y_2}{\partial x_1} & \dfrac{\partial y_2}{\partial x_2} & \dfrac{\partial y_2}{\partial x_3} \\ \dfrac{\partial y_3}{\partial x_1} & \dfrac{\partial y_3}{\partial x_2} & \dfrac{\partial y_3}{\partial x_3} \end{vmatrix}$$
- If $|J| = 0$, then there is functional dependence among the equations and hence no unique
solution exists.
E.g., $y_1 = 4x_1 - x_2 \Rightarrow y_{11} = 4$ and $y_{12} = -1$, where $y_{11}$ means
$\dfrac{\partial y_1}{\partial x_1}$ and $y_{12}$ means $\dfrac{\partial y_1}{\partial x_2}$.
$$y_2 = 16x_1^2 + 8x_1x_2 + x_2^2 \;\Rightarrow\; \frac{\partial y_2}{\partial x_1} = 32x_1 + 8x_2, \qquad \frac{\partial y_2}{\partial x_2} = 8x_1 + 2x_2$$
$$|J| = \begin{vmatrix} 4 & -1 \\ 32x_1 + 8x_2 & 8x_1 + 2x_2 \end{vmatrix} = 4(8x_1 + 2x_2) + (32x_1 + 8x_2) = 64x_1 + 16x_2 \neq 0;$$
thus the system has a unique solution because the Jacobian matrix is non-singular, i.e., its
determinant is different from zero.
As stated in the preceding paragraphs, partial derivatives provide a means of testing whether
there exists functional (linear or nonlinear) dependence among a set of $n$ functions in $n$
variables; this is precisely what the Jacobian determinant captures.
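The same test takes only a few lines with SymPy; a minimal sketch using the example above:

```python
# Sketch: testing functional dependence via the Jacobian determinant.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
y1 = 4*x1 - x2
y2 = 16*x1**2 + 8*x1*x2 + x2**2

J = sp.Matrix([y1, y2]).jacobian([x1, x2])
print(J)         # Matrix([[4, -1], [32*x1 + 8*x2, 8*x1 + 2*x2]])
print(J.det())   # 64*x1 + 16*x2 -- not identically zero: no dependence
```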
Exercises
1. Optimize $y = 3x_1^2 - 5x_1 - x_1x_2 + 6x_2^2 - 4x_2 + 2x_2x_3 + 4x_3^2 + 2x_3 - 3x_1x_3$, using:
a) Cramer's rule for the first-order condition;
b) the Hessian for the S.O.C.
2. A monopolistic firm produces two related goods. The demand and total cost functions are:
$$Q_1 = 100 - 2p_1 + 2p_2$$
$$Q_2 = 75 + 0.5p_1 - p_2$$
$$TC = Q_1^2 + 2Q_1Q_2 + Q_2^2$$
Optimize the firm's profit function by:
a) finding the inverse demand functions;
b) using Cramer's rule for the F.O.C.;
c) using the Hessian for the S.O.C.
3. Minimize a firm's total cost $c = 45x^2 + 90xy + 90y^2$ when the firm has to meet a
production quota $2x + 3y = 60$, by:
a) finding the critical values;
b) using the bordered Hessian to test the S.O.C.
4. A monopolist firm produces two substitute goods and faces the following demand functions:
$$x = 80 - p_x + 2p_y$$
$$y = 110 + 2p_x - 6p_y$$
Its total cost function is $TC = 0.5x^2 + xy + y^2$. What is the profit-maximizing level of
output of each good?
Optimization with Inequality Constraints
Example-1: Consider a maximization problem subject to the constraints
$$6x_1 + 2x_2 \le 36$$
$$5x_1 + 5x_2 \le 40$$
$$2x_1 + 4x_2 \le 28$$
$$x_1, x_2 \ge 0$$
Here we add slack variables to turn each inequality constraint into an equality, so that it can
be treated as binding (active).
Example-2: Minimize $C = 2x_1 + 4x_2$, where $C$ denotes cost, subject to
$$2x_1 + x_2 \ge 14$$
$$x_1 + x_2 \ge 12$$
$$x_1 + 3x_2 \ge 18$$
$$x_1, x_2 \ge 0$$
Here we subtract surplus variables to turn each inequality constraint into an equality.
Note: If at optimality the slack/surplus of a constraint is zero, then the constraint is said to
be active or binding; otherwise, it is inactive.
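Linear programs such as Example-2 are normally handed to a solver. A minimal sketch with SciPy's linprog, which expects less-than-or-equal constraints, so each greater-than-or-equal constraint is multiplied by minus one:

```python
# Sketch: solving Example-2 (min C = 2*x1 + 4*x2) with scipy.optimize.linprog.
from scipy.optimize import linprog

c = [2, 4]                        # objective coefficients
A_ub = [[-2, -1],                 # 2*x1 +   x2 >= 14, negated
        [-1, -1],                 #   x1 +   x2 >= 12, negated
        [-1, -3]]                 #   x1 + 3*x2 >= 18, negated
b_ub = [-14, -12, -18]
bounds = [(0, None), (0, None)]   # x1, x2 >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)             # about [9. 3.] with C = 30
```

At the optimum $x_1 = 9$, $x_2 = 3$, the second and third constraints hold as equalities (binding), while the first has positive surplus (inactive).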
Two variable, one constraint case
$$\text{Max } \Pi = f(x_1, x_2) \quad \text{subject to} \quad g(x_1, x_2) \le r, \qquad x_1, x_2 \ge 0$$
Add a slack variable to change the constraint into a strict equality; thus:
$$\text{Max } \Pi = f(x_1, x_2) \quad \text{subject to} \quad g(x_1, x_2) + s = r, \qquad x_1, x_2, s \ge 0$$
Step 1: Form the Lagrangian function
$$L = f(x_1, x_2) + \lambda\left[r - g(x_1, x_2) - s\right]$$
Step 2: Find the first-order partials with respect to each of the choice variables. Because the
variables must be non-negative, each partial satisfies
$$L_{x_1} = \frac{\partial L}{\partial x_1} \le 0, \quad x_1 \ge 0, \quad \text{and} \quad x_1\frac{\partial L}{\partial x_1} = 0 \qquad (1)$$
$$L_{x_2} = \frac{\partial L}{\partial x_2} \le 0, \quad x_2 \ge 0, \quad \text{and} \quad x_2\frac{\partial L}{\partial x_2} = 0 \qquad (2)$$
$$L_s = \frac{\partial L}{\partial s} \le 0, \quad s \ge 0, \quad \text{and} \quad s\frac{\partial L}{\partial s} = 0 \qquad (3)$$
$$L_\lambda = r - g(x_1, x_2) - s = 0 \qquad (4)$$
Now we can consolidate equations (3) and (4) and, in the process, eliminate the dummy variable
$s$. Since $\frac{\partial L}{\partial s} = -\lambda$, equation (3) tells us that
$-\lambda \le 0$, $s \ge 0$, and $-\lambda s = 0$, or equivalently
$$\lambda \ge 0, \quad s \ge 0, \quad \text{and} \quad \lambda s = 0 \qquad (5)$$
From equation (4) we have
$$s = r - g(x_1, x_2)$$
- Thus, the Kuhn-Tucker (K-T) conditions for a maximization problem with an inequality
constraint are:
$$\frac{\partial L}{\partial x_1} = f_1 - \lambda g_1 \le 0, \quad x_1 \ge 0, \quad \text{and} \quad x_1\frac{\partial L}{\partial x_1} = 0$$
$$\frac{\partial L}{\partial x_2} = f_2 - \lambda g_2 \le 0, \quad x_2 \ge 0, \quad \text{and} \quad x_2\frac{\partial L}{\partial x_2} = 0$$
$$\frac{\partial L}{\partial \lambda} = r - g(x_1, x_2) \ge 0, \quad \lambda \ge 0, \quad \text{and} \quad \lambda\left[r - g(x_1, x_2)\right] = 0$$
- The Kuhn-Tucker (K-T) conditions for a minimization problem with an inequality constraint
reverse the inequalities on the partials:
$$L_{x_1} = f_1 - \lambda g_1 \ge 0, \quad x_1 \ge 0, \quad \text{and} \quad x_1 L_{x_1} = 0$$
$$L_{x_2} = f_2 - \lambda g_2 \ge 0, \quad x_2 \ge 0, \quad \text{and} \quad x_2 L_{x_2} = 0$$
$$L_\lambda = r - g(x_1, x_2) \le 0, \quad \lambda \ge 0, \quad \text{and} \quad \lambda L_\lambda = 0$$
Example-1
Step 2: Solve the first-order conditions, setting the partials with respect to each of the
choice variables equal to zero:
$$L_{x_1} = 10 - 2x_1 - \lambda \le 0 \qquad (1)$$
$$L_{x_2} = 180 - 2x_2 - \lambda \le 0 \qquad (2)$$
Treating (1) and (2) as strict equalities together with the binding constraint $x_1 + x_2 = 80$
yields $x_1 = -2.5$, which is inadmissible because $x_1$ cannot be negative.
Hence, setting $x_1^* = 0$, we have $x_2^* = 80$
and $\lambda^* = 20$.
Step 3: Check whether the constraints are satisfied:
i) non-negativity: $x_1^* = 0$, $x_2^* = 80$, $\lambda^* = 20$, all $\ge 0$;
ii) inequality constraint: $x_1 + x_2 \le 80$, and indeed $0 + 80 \le 80$.
Step 4: Check the complementary slackness conditions.
- $x_1\dfrac{\partial L}{\partial x_1} = 0$: since $x_1^* = 0$, the condition holds.
- $x_2\dfrac{\partial L}{\partial x_2} = 0$: since $x_2^* = 80 \neq 0$, we need $\dfrac{\partial L}{\partial x_2} = 0$:
$$180 - 2x_2 - \lambda = 180 - 2(80) - 20 = 0$$
- $\lambda\dfrac{\partial L}{\partial \lambda} = 0$: since $\lambda^* = 20 \neq 0$, we need $\dfrac{\partial L}{\partial \lambda} = 0$:
$$80 - x_1 - x_2 = 80 - 0 - 80 = 0$$
Since all the K-T conditions are satisfied, the function is maximized at $x_1^* = 0$ and
$x_2^* = 80$.
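A numerical check is sketched below. The objective function of this example is not restated in the text above, so the sketch assumes $f = 10x_1 - x_1^2 + 180x_2 - x_2^2$, the function implied by first-order conditions (1) and (2), maximized subject to $x_1 + x_2 \le 80$:

```python
# Sketch: numerically checking the K-T solution with SciPy.
# The objective is an assumption reconstructed from F.O.C. (1)-(2).
from scipy.optimize import minimize

f = lambda x: -(10*x[0] - x[0]**2 + 180*x[1] - x[1]**2)  # negated to maximize
cons = [{'type': 'ineq', 'fun': lambda x: 80 - x[0] - x[1]}]
bounds = [(0, None), (0, None)]

res = minimize(f, x0=[1.0, 1.0], bounds=bounds, constraints=cons)
print(res.x)   # about [ 0. 80.], matching x1* = 0 and x2* = 80
```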
Example-2
Minimize $C = (x_1 - 4)^2 + (x_2 - 4)^2$
subject to:
$$2x_1 + 3x_2 \ge 6$$
$$-3x_1 - 2x_2 \ge -12$$
$$x_1, x_2 \ge 0$$
Solution:
Step 1: Formulate the Lagrangian
$$L = (x_1 - 4)^2 + (x_2 - 4)^2 + \lambda_1\left(6 - 2x_1 - 3x_2\right) + \lambda_2\left(-12 + 3x_1 + 2x_2\right)$$
Step 2: The Kuhn-Tucker conditions are:
$$L_{x_1} = 2(x_1 - 4) - 2\lambda_1 + 3\lambda_2 \ge 0 \qquad (1)$$
$$L_{x_2} = 2(x_2 - 4) - 3\lambda_1 + 2\lambda_2 \ge 0 \qquad (2)$$
$$L_{\lambda_1} = 6 - 2x_1 - 3x_2 \le 0 \qquad (3)$$
$$L_{\lambda_2} = -12 + 3x_1 + 2x_2 \le 0 \qquad (4)$$
To solve such a multivariable nonlinear programming problem, let us use the iteration (trial and
error) method.
Trying the solution $x_1^* = \frac{28}{13}$, $x_2^* = \frac{36}{13}$ (which satisfies (1)-(4)
with $\lambda_1 = 0$), check the constraints:
$$2\left(\frac{28}{13}\right) + 3\left(\frac{36}{13}\right) = \frac{164}{13} \ge 6$$
$$-3\left(\frac{28}{13}\right) - 2\left(\frac{36}{13}\right) = -\frac{156}{13} = -12 \ge -12$$
Both constraints are satisfied (the second one as an equality), so the cost function is
minimized at $x_1^* = \frac{28}{13}$ and $x_2^* = \frac{36}{13}$.
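The trial-and-error solution can be confirmed numerically; a minimal sketch:

```python
# Sketch: confirming the minimum of Example-2 with SciPy.
from scipy.optimize import minimize

C = lambda x: (x[0] - 4)**2 + (x[1] - 4)**2
cons = [{'type': 'ineq', 'fun': lambda x: 2*x[0] + 3*x[1] - 6},   # 2x1+3x2 >= 6
        {'type': 'ineq', 'fun': lambda x: 12 - 3*x[0] - 2*x[1]}]  # 3x1+2x2 <= 12
bounds = [(0, None), (0, None)]

res = minimize(C, x0=[1.0, 1.0], bounds=bounds, constraints=cons)
print(res.x)   # about [2.1538 2.7692], i.e. x1* = 28/13, x2* = 36/13
```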