
Chapter 4

Constrained Optimization

Introduction
In economic optimization problems, the variables involved are often required to satisfy certain
constraints. In unconstrained optimization problems, no restrictions are placed on the values of
the choice variables. But in reality, optimization of an economic function should be in line with
certain resource requirements or availability. This emanates from the problem of scarcity of
resources. For example, maximization of production should be subject to the availability of
inputs, and minimization of cost should still satisfy a certain level of output. The other common
constraint in economics is the non-negativity restriction: although negative values may sometimes
be admissible, most functions in economics are meaningful only in the first quadrant. So these
constraints should be considered in optimization problems.
Constrained optimization deals with optimization of the objective function (the function to be
optimized) subject to constraints (restrictions). When both the objective and constraint
functions are linear, the problem is solved using the linear programming model. But when we face
nonlinear functions, we use the concept of derivatives for optimization. This chapter focuses on
optimization of nonlinear constrained functions.

4.1. One-Variable Constrained Optimization

Consider Max/Min y = f(x), subject to x ≥ 0. The first order conditions are:

i) Max: f'(x̄) ≤ 0, where
      if f'(x̄) = 0, then x̄ ≥ 0
      if f'(x̄) < 0, then x̄ = 0
ii) Min: f'(x̄) ≥ 0, where
      if f'(x̄) = 0, then x̄ ≥ 0
      if f'(x̄) > 0, then x̄ = 0
Example 1:
Max y = −3x² − 7x + 2, subject to x ≥ 0

In unconstrained optimization:
   F.O.C.: f'(x) = −6x − 7 = 0
   x = −7/6

But imposing the non-negativity constraint we have:

   x* = 0, with f(0) = 2 and f'(0) = −7 < 0
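A quick numerical check of this example (a minimal Python sketch; the grid of feasible points is only for illustration):

```python
# Max y = -3x^2 - 7x + 2 subject to x >= 0.
# The unconstrained maximizer x = -7/6 is infeasible, so the
# constrained optimum sits at the boundary x = 0.

def f(x):
    return -3 * x**2 - 7 * x + 2

def f_prime(x):
    return -6 * x - 7

x_star = 0.0                       # boundary solution
assert f_prime(x_star) < 0         # f'(0) = -7 < 0: the case giving x̄ = 0
# no feasible point does better than x = 0
assert all(f(x_star) >= f(0.01 * k) for k in range(1, 1001))
print(f(x_star), f_prime(x_star))  # 2.0 -7.0
```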

4.2. Two-Variable Problems with Equality Constraints

In the case of two choice variables, an optimization problem with an equality constraint takes
the form

   Max/Min: y = f(x1, x2), subject to: g(x1, x2) = c
    x1, x2

This type of optimization problem is commonly used in economics because, for the purpose of
simplification, two-variable cases are assumed in finding optimum values. For example, in
maximization of utility using the indifference-curve approach, the consumer is assumed to
consume a bundle of two goods.

Example: Max u(x1, x2), subject to p1x1 + p2x2 = M


In this section, we will see two methods of solving two-variable optimization problems with
equality constraints.
i) Direct Substitution Method:
The direct substitution method is used for a two-variable optimization problem with only one
constraint. It is a relatively simple method: one variable is eliminated by substitution before
applying the first order condition. Consider the consumer problem in the above example.

   p2x2 = M − p1x1
   x2 = M/p2 − (p1/p2)x1

Now x2 is expressed as a function of x1. Substituting this expression, we can eliminate x2 from
the objective function.
   ⇒ Max u = u(x1, x2(x1))

   du/dx1 = ∂u/∂x1 + (∂u/∂x2)(∂x2/∂x1) = 0

          = MU1 + MU2(−p1/p2) = 0

(where MUi = ∂u/∂xi denotes the marginal utility of good i)

   ⇒ MU1 = MU2 (p1/p2)

   ⇒ MU1/MU2 = p1/p2

Example: u = x1x2, subject to x1 + 4x2 = 120

   4x2 = 120 − x1
   x2 = 30 − (1/4)x1

   ⇒ u = x1(30 − (1/4)x1) = 30x1 − (1/4)x1²

F.O.C. (First Order Condition): du/dx1 = 30 − (1/2)x1 = 0

   ⇒ x1 = 60 and x2 = 15
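The substitution above can be verified numerically (a minimal sketch; the coarse sweep along the budget line is only for illustration):

```python
# u = x1 * x2 subject to x1 + 4*x2 = 120.
# After substituting x2 = 30 - x1/4, u(x1) = 30*x1 - x1**2/4.

def u(x1):
    x2 = 30 - x1 / 4           # eliminate x2 via the constraint
    return x1 * x2

x1_star = 60
x2_star = 30 - x1_star / 4     # = 15
# no point on the constraint (with x1 in [0, 120]) does better than (60, 15)
assert all(u(x1_star) >= u(0.1 * k) for k in range(0, 1201))
print(x1_star, x2_star, u(x1_star))   # 60 15.0 900.0
```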


II) Lagrange Multiplier Method
When the constraint is a complicated function, or when there are several constraints, we resort
to the method of Lagrange.

   Max: f(x1, x2)
   Subject to: g(x1, x2) = c

   L = f(x1, x2) + λ(c − g(x1, x2))

L denotes the Lagrangian function.
An Interpretation of the Lagrange Multiplier
The Lagrange multiplier, λ, measures the effect on the objective function of a one-unit change
in the constant of the constraint function (the stock of a resource). To show this, given the
Lagrangian function

   L = f(x1, x2) + λ(c − g(x1, x2))

F.O.C.:
   L1 = f1(x1, x2) − λg1(x1, x2) = 0
   L2 = f2(x1, x2) − λg2(x1, x2) = 0
   Lλ = c − g(x1, x2) = 0

We can express the optimal choices λ*, x1* and x2* as implicit functions of the parameter c:

   x1* = x1*(c)
   x2* = x2*(c)
   λ* = λ*(c)

Now, since the optimal value of L depends on λ*, x1* and x2*, we may consider L* to be a
function of c alone. That is,

   L* = f(x1*(c), x2*(c)) + λ*(c)[c − g(x1*(c), x2*(c))]

Differentiating L* with respect to c, we have:

   dL*/dc = f1(dx1*/dc) + f2(dx2*/dc) + [c − g(x1*, x2*)](dλ*/dc)
            + λ*[1 − g1(dx1*/dc) − g2(dx2*/dc)]

          = (f1 − λ*g1)(dx1*/dc) + (f2 − λ*g2)(dx2*/dc) + [c − g(x1*, x2*)](dλ*/dc) + λ*

By the first order conditions, the first three terms are all equal to zero. Therefore,

   dL*/dc = λ*
Note: If λ > 0, then for every one-unit increase (decrease) in the constant of the constraint
function, the objective function will increase (decrease) by a value approximately equal to λ.

If λ < 0, a unit increase (decrease) in the constant of the constraint will lead to a decrease
(increase) in the value of the objective function by approximately |λ|.
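This interpretation can be illustrated with the problem Max z = 2xy subject to 3x + 4y = c (solved in full later in this section for c = 90). A sketch under the assumption that the closed-form optimum x = c/6, y = c/8 is taken as given:

```python
# For Max z = 2xy s.t. 3x + 4y = c, the optimum is x = c/6, y = c/8,
# so z*(c) = c**2 / 24 and dz*/dc = c/12 = lambda*.

def z_star(c):
    return 2 * (c / 6) * (c / 8)

c, dc = 90.0, 1.0
lam = c / 12                          # lambda* = 7.5 at c = 90
approx = z_star(c + dc) - z_star(c)   # effect of one extra unit of c
print(lam, approx)                    # lambda* = 7.5, delta z* ~ 7.54
assert abs(approx - lam) < 0.1
```

The one-unit gain in the optimal value (about 7.54) is approximately the multiplier λ* = 7.5, as the derivation above predicts.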
S.O.C. Conditions for a Constrained Optimization Problem

   Max: f(x, y), subject to g(x, y) = 0
   L = f(x, y) + λg(x, y)

F.O.C.:
   Lλ = g(x, y) = 0
   Lx = fx + λgx = 0
   Ly = fy + λgy = 0

S.O.C.:
   Lλλ = 0, Lλx = gx, Lλy = gy
   Lxx = fxx + λgxx, Lxy = fxy + λgxy
   Lyy = fyy + λgyy

In matrix form:

   ( Lλλ  Lλx  Lλy )   ( 0   gx   gy  )
   ( Lxλ  Lxx  Lxy ) = ( gx  Lxx  Lxy )
   ( Lyλ  Lyx  Lyy )   ( gy  Lyx  Lyy )

This matrix is called the Bordered Hessian and its determinant is denoted by |H̄|. The bordered
Hessian is simply the plain Hessian

   | Lxx  Lxy |
   | Lyx  Lyy |

bordered by the first order derivatives of the constraint, with zero on the principal diagonal.

Determinant criterion for the sign definiteness of d²z:

                 | 0   gx   gy  |
   d²z, subject  | gx  Lxx  Lxy |  < 0 → positive definite → min
   to dg = 0, is | gy  Lyx  Lyy |  > 0 → negative definite → Max

n-Variable Case

Given f(x1, x2, ..., xn), subject to g(x1, x2, ..., xn) = c:

         | 0   g1   g2   ...  gn  |
         | g1  L11  L12  ...  L1n |
   |H̄| = | g2  L21  L22  ...  L2n |
         | ...                    |
         | gn  Ln1  Ln2  ...  Lnn |

Its bordered leading principal minors can be defined as:

          | 0   g1   g2  |           | 0   g1   g2   g3  |
   |H̄2| = | g1  L11  L12 | ,  |H̄3| = | g1  L11  L12  L13 | ,  etc.
          | g2  L21  L22 |           | g2  L21  L22  L23 |
                                     | g3  L31  L32  L33 |

Conditions:
   F.O.C. (Maximization): Lλ = L1 = L2 = ... = Ln = 0
   F.O.C. (Minimization): the same, i.e. Lλ = L1 = L2 = ... = Ln = 0
   S.O.C. (Maximization): |H̄2| > 0, |H̄3| < 0, |H̄4| > 0, ..., (−1)ⁿ|H̄n| > 0
   S.O.C. (Minimization): |H̄2| < 0, |H̄3| < 0, ..., |H̄n| < 0

Example: Max z(x, y) = 2xy, subject to 3x + 4y = 90

   L = 2xy + λ(90 − 3x − 4y)

F.O.C.:
   Lλ = 90 − 3x − 4y = 0
   Lx = 2y − 3λ = 0
   Ly = 2x − 4λ = 0

The resulting system of equations can be solved by any of the following methods:
a) direct substitution
b) the inverse method
c) Cramer's rule
d) the Gauss-Jordan elimination method

Solving yields: x* = 15, y* = 11.25, λ* = 7.5

S.O.C.:
         | 0   g1   g2  |          | 0  3  4 |
   |H̄| = | g1  Lxx  Lxy |  ⇒ |H̄| = | 3  0  2 |
         | g2  Lyx  Lyy |          | 4  2  0 |

   |H̄| = |H̄2| = 48 > 0 → negative definite (Max)
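The bordered-Hessian determinant above can be checked with a few lines of Python (a sketch using a hand-coded 3×3 cofactor expansion rather than any linear-algebra library):

```python
# |H| = | 0  3  4 |
#       | 3  0  2 |
#       | 4  2  0 |

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

H = [[0, 3, 4],
     [3, 0, 2],
     [4, 2, 0]]
print(det3(H))   # 48 > 0  ->  negative definite, so a maximum
assert det3(H) == 48
```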

More Than One Equality Constraint

   Max/Min: f(x1, x2, x3), subject to g¹(x1, x2, x3) = c1 and g²(x1, x2, x3) = c2

   L = f(x1, x2, x3) + λ1(c1 − g¹(x1, x2, x3)) + λ2(c2 − g²(x1, x2, x3))

F.O.C.:
   L1 = f1 − λ1g¹1 − λ2g²1 = 0
   L2 = f2 − λ1g¹2 − λ2g²2 = 0
   L3 = f3 − λ1g¹3 − λ2g²3 = 0
   Lλ1 = c1 − g¹(x1, x2, x3) = 0
   Lλ2 = c2 − g²(x1, x2, x3) = 0

S.O.C.:

        | 0    0    g¹1  g¹2  g¹3 |
        | 0    0    g²1  g²2  g²3 |
   H̄ =  | g¹1  g²1  L11  L12  L13 |
        | g¹2  g²2  L21  L22  L23 |
        | g¹3  g²3  L31  L32  L33 |

   |H̄2| is the bordered leading principal minor that contains L22 as the last element of its
   principal diagonal.
   |H̄3| is the one that contains L33 as the last element of its principal diagonal.
A Jacobian Determinant
A Jacobian determinant permits testing for functional dependency among both linear and
non-linear functions. It is composed of all the first order partial derivatives of a system of
equations, arranged in ordered sequence.

Example: given
   y1 = f¹(x1, x2, x3)
   y2 = f²(x1, x2, x3)
   y3 = f³(x1, x2, x3)

         | ∂y1/∂x1  ∂y1/∂x2  ∂y1/∂x3 |
   |J| = | ∂y2/∂x1  ∂y2/∂x2  ∂y2/∂x3 |
         | ∂y3/∂x1  ∂y3/∂x2  ∂y3/∂x3 |

- If |J| = 0, then there is functional dependency among the equations and hence no unique
  solution exists.

E.g. y1 = 4x1 − x2 ⇒ y11 = 4 and y12 = −1 (where y11 means ∂y1/∂x1 and y12 means ∂y1/∂x2)

   y2 = 16x1² + 8x1x2 + x2² ⇒ ∂y2/∂x1 = 32x1 + 8x2
                              ∂y2/∂x2 = 8x1 + 2x2

         | 4            −1        |
   |J| = |                        | = 4(8x1 + 2x2) + (32x1 + 8x2) = 64x1 + 16x2 ≠ 0,
         | 32x1 + 8x2   8x1 + 2x2 |

thus the system has a unique solution because the Jacobian determinant is different from zero,
i.e., the Jacobian matrix is non-singular.
As stated in the preceding paragraphs, partial derivatives provide a means of testing whether
there exists functional (linear or nonlinear) dependence among a set of n functions in
n variables; this is the role of the Jacobian determinant.
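The 2×2 Jacobian determinant from the example can be evaluated directly (a minimal sketch; the test point (1, 1) is an arbitrary choice):

```python
# |J| = | 4            -1         |
#       | 32x1 + 8x2   8x1 + 2x2  |

def jacobian_det(x1, x2):
    # ad - bc for the 2x2 Jacobian of y1 = 4x1 - x2,
    # y2 = 16x1^2 + 8x1x2 + x2^2; simplifies to 64x1 + 16x2
    return 4 * (8*x1 + 2*x2) - (-1) * (32*x1 + 8*x2)

# nonzero at a generic point, so the equations are functionally
# independent there
print(jacobian_det(1, 1))   # 80
assert jacobian_det(1, 1) == 64 * 1 + 16 * 1
```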
Exercise
1. Optimize y = 3x1² − 5x1 − x1x2 + 6x2² − 4x2 + 2x2x3 + 4x3² + 2x3 − 3x1x3, using:
   a) Cramer's rule for the first order condition.
   b) The Hessian for the S.O.C.
2. A monopolistic firm produces two related goods. The demand and total cost functions are:
   Q1 = 100 − 2p1 + 2p2
   Q2 = 75 + 0.5p1 − p2
   TC = Q1² + 2Q1Q2 + Q2²
   Optimize the firm's profit function by:
   a) Finding the inverse demand functions.
   b) Using Cramer's rule for the F.O.C.
   c) Using the Hessian for the S.O.C.
3. Minimize a firm's total cost c = 45x² + 90xy + 90y², when the firm has to meet a production
   quota 2x + 3y = 60, by:
   a) finding the critical values
   b) using the bordered Hessian to test the S.O.C.
4. A monopolist firm produces two substitute goods and faces the following demand functions:
   x = 80 − px + 2py
   y = 110 + 2px − 6py
   Its total cost function is TC = 0.5x² + xy + y². What is the profit-maximizing level of
   output if the firm has to meet a production quota of 3x + 4y = 350?
5. Maximize utility u(x, y) = x^0.25 y^0.4, subject to 2x + 8y = 104.

4.3. Inequality Constraints & the Theorem of Kuhn and Tucker

Inequality constraints include those requiring certain variables to be non-negative; these often
have to be imposed so that the solution makes economic sense. In addition, bounds on the
availability of resources are often expressed as inequalities rather than equalities. Consider
the following examples.

Example 1: Max Π = 5x1 + 3x2 (Π stands for profit)

   subject to: 6x1 + 2x2 ≤ 36
               5x1 + 5x2 ≤ 40
               2x1 + 4x2 ≤ 28
               x1, x2 ≥ 0

Here we add slack variables to convert the inequality constraints into equalities.

Example 2: Min C = 2x1 + 4x2 (C denotes cost)

   subject to: 2x1 + x2 ≥ 14
               x1 + x2 ≥ 12
               x1 + 3x2 ≥ 18
               x1, x2 ≥ 0

Here we subtract surplus variables to convert the inequality constraints into equalities.

Note: If at optimality the slack/surplus of a constraint is zero, then the constraint is said to
be active or binding. Otherwise, it is inactive.

Two-Variable, One-Constraint Case

   Max Π = f(x1, x2), subject to g(x1, x2) ≤ r, x1, x2 ≥ 0

Add a slack variable to change the constraint into a strict equality; thus:

   Max Π = f(x1, x2), subject to g(x1, x2) + s = r, x1, x2, s ≥ 0
Step 1: Form the Lagrangian function

   L = f(x1, x2) + λ[r − g(x1, x2) − s]

Step 2: Find the first order partials with respect to each of the choice variables:

   Lx1 ≤ 0, x1 ≥ 0, and x1(∂L/∂x1) = 0 ⋯⋯ (1)
   Lx2 ≤ 0, x2 ≥ 0, and x2(∂L/∂x2) = 0 ⋯⋯ (2)
   Ls ≤ 0, s ≥ 0, and s(∂L/∂s) = 0 ⋯⋯⋯⋯ (3)
   Lλ = r − g(x1, x2) − s = 0 ⋯⋯⋯⋯⋯⋯⋯ (4)

Now we can consolidate equations (3) and (4) and, in the process, eliminate the dummy variable s.
Since ∂L/∂s = −λ, equation (3) tells us that −λ ≤ 0, s ≥ 0, and −λs = 0, or equivalently

   λ ≥ 0, s ≥ 0, and λs = 0 ⋯⋯⋯ (5)

- From equation (4) we have s = r − g(x1, x2).
- Substituting (4) in (5), we have

   λ ≥ 0, r − g(x1, x2) ≥ 0, and λ[r − g(x1, x2)] = 0

- Thus, the Kuhn-Tucker (K-T) conditions for a maximization problem with an inequality
  constraint are:

   ∂L/∂x1 = f1 − λg1 ≤ 0, x1 ≥ 0, and x1(∂L/∂x1) = 0
   ∂L/∂x2 = f2 − λg2 ≤ 0, x2 ≥ 0, and x2(∂L/∂x2) = 0
   ∂L/∂λ = r − g(x1, x2) ≥ 0, λ ≥ 0, and λ[r − g(x1, x2)] = 0

- The Kuhn-Tucker (K-T) conditions for a minimization problem with an inequality constraint
  are:

   Lx1 = f1 − λg1 ≥ 0, x1 ≥ 0, and x1Lx1 = 0
   Lx2 = f2 − λg2 ≥ 0, x2 ≥ 0, and x2Lx2 = 0
   Lλ = r − g(x1, x2) ≤ 0, λ ≥ 0, and λLλ = 0

N.B. For the n-variable, m-constraint case, please read A.C. Chiang.


Example 1

   max y = 10x1 − x1² + 180x2 − x2², subject to x1 + x2 ≤ 80, x1, x2 ≥ 0

Step 1: Form the Lagrangian function

   L = 10x1 − x1² + 180x2 − x2² + λ(80 − x1 − x2)

Step 2: Solve the first order conditions (F.O.C.):

   Lx1 = 10 − 2x1 − λ ≤ 0 ⋯⋯⋯ (1)
   Lx2 = 180 − 2x2 − λ ≤ 0 ⋯⋯ (2)
   Lλ = 80 − x1 − x2 ≥ 0 ⋯⋯⋯ (3)

Treating (1) and (2) as equalities, we have x2 = x1 + 85 ⋯⋯ (4)

- Substituting (4) into (3) as an equality, we have x1 = −5/2, which is not feasible because
  the x's should be non-negative.
- Hence, setting x1* = 0, we have x2* = 80 and λ* = 20.

Step 3: Check whether the constraints are satisfied.
   i) non-negativity constraint: x1* = 0, x2* = 80, λ* = 20
   ii) inequality constraint: x1 + x2 ≤ 80, and 0 + 80 ≤ 80

Step 4: Check the complementary slackness conditions.

   x1(∂L/∂x1) = 0: since x1 = 0, this holds (and indeed ∂L/∂x1 = 10 − 0 − 20 = −10 ≤ 0).
   x2(∂L/∂x2) = 0: since x2* = 80 > 0, we require ∂L/∂x2 = 0:
      180 − 2(80) − 20 = 0
      0 = 0
   λ(∂L/∂λ) = 0: since λ* = 20 > 0, we require ∂L/∂λ = 0:
      80 − x1 − x2 = 80 − 0 − 80 = 0
      0 = 0

* Since all the K-T conditions are satisfied, the function is maximized at x1* = 0 and x2* = 80.
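The K-T check in Steps 2-4 can be sketched in a few lines of Python, using only the numbers from the example:

```python
# Kuhn-Tucker check for
#   max 10x1 - x1^2 + 180x2 - x2^2  s.t.  x1 + x2 <= 80, x1, x2 >= 0
x1, x2, lam = 0.0, 80.0, 20.0

Lx1 = 10 - 2*x1 - lam        # = -10  (nonpositive, x1 at the boundary)
Lx2 = 180 - 2*x2 - lam       # = 0    (x2 interior, so equality)
Llam = 80 - x1 - x2          # = 0    (constraint binding, lam > 0)

assert Lx1 <= 0 and x1 >= 0 and x1 * Lx1 == 0
assert Lx2 <= 0 and x2 >= 0 and x2 * Lx2 == 0
assert Llam >= 0 and lam >= 0 and lam * Llam == 0
print(Lx1, Lx2, Llam)        # -10.0 0.0 0.0
```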
Example 2

Minimize C = (x1 − 4)² + (x2 − 4)²

Subject to: 2x1 + 3x2 ≥ 6
            −3x1 − 2x2 ≥ −12
            x1, x2 ≥ 0

Solution:
Step 1: Formulate the Lagrangian

   L = (x1 − 4)² + (x2 − 4)² + λ1(6 − 2x1 − 3x2) + λ2(−12 + 3x1 + 2x2)

F.O.C. (First Order Conditions):

   L1 = 2(x1 − 4) − 2λ1 + 3λ2 ≥ 0 ⋯⋯⋯ (1)
   L2 = 2(x2 − 4) − 3λ1 + 2λ2 ≥ 0 ⋯⋯⋯ (2)
   Lλ1 = 6 − 2x1 − 3x2 ≤ 0 ⋯⋯⋯⋯⋯⋯⋯ (3)
   Lλ2 = −12 + 3x1 + 2x2 ≤ 0 ⋯⋯⋯⋯⋯⋯ (4)

To solve such a multivariable non-linear programming problem, let us use the iteration (trial
and error) method.

Iteration 1: λ1 > 0 and λ2 > 0
Then ∂L/∂λ1 = 0 and ∂L/∂λ2 = 0, i.e. 2x1 + 3x2 = 6 and 3x1 + 2x2 = 12.
Solving simultaneously yields x1 = 24/5 and x2 = −6/5 (this is not feasible).

Iteration 2: λ1 = 0 and λ2 > 0
From equations (1) and (2), treated as equalities (since x1, x2 > 0), we have:
   2x1 − 2λ1 + 3λ2 = 8 ⋯⋯⋯ (5)
   2x2 − 3λ1 + 2λ2 = 8 ⋯⋯⋯ (6)
And λ2 > 0 ⇒ ∂L/∂λ2 = 0 ⇒ 3x1 + 2x2 = 12 ⋯⋯⋯ (7)

⇒ Solving equations (5), (6), and (7), with λ1 = 0, yields

   x1* = 28/13
   x2* = 36/13
   λ1 = 0
   λ2 = 16/13

- Check whether the constraints are satisfied:
   i) Non-negativity constraint: satisfied.
   ii) Inequality constraints:
      2(28/13) + 3(36/13) = 164/13 ≥ 6
      −3(28/13) − 2(36/13) = −12 ≥ −12
- Check the complementary slackness conditions: these are satisfied, since λ1 = 0 where the
  first constraint is slack, and λ2 > 0 where the second constraint is binding.

* Therefore, since the solution values for the four variables satisfy all the K-T conditions,
they are acceptable as the final solution.
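The candidate from Iteration 2 can be verified exactly with rational arithmetic (a minimal sketch using Python's standard fractions module):

```python
from fractions import Fraction as F

# Candidate from Iteration 2: x1 = 28/13, x2 = 36/13, l1 = 0, l2 = 16/13
x1, x2 = F(28, 13), F(36, 13)
l1, l2 = F(0), F(16, 13)

L1 = 2*(x1 - 4) - 2*l1 + 3*l2     # stationarity in x1
L2 = 2*(x2 - 4) - 3*l1 + 2*l2     # stationarity in x2
g1 = 6 - 2*x1 - 3*x2              # first constraint (must be <= 0 here)
g2 = -12 + 3*x1 + 2*x2            # second constraint (must be <= 0 here)

assert L1 == 0 and L2 == 0        # x1, x2 > 0, so equalities hold
assert g1 <= 0 and l1 * g1 == 0   # constraint 1 slack, l1 = 0
assert g2 == 0 and l2 > 0         # constraint 2 binding, l2 > 0
print(x1, x2, l2)                 # 28/13 36/13 16/13
```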
