
Chapter 4

The Big - M Method

4.1 Introduction
So far, in finding an optimal solution to a maximization problem using the primal
simplex method, we have assumed that there exist m basic vectors. Suppose we
have fewer than m such vectors; then we introduce additional variables to bring the number up to m.
Definition 4.1.1 The Big-M method is a technique for solving linear programming problems in which artificial
variables are assigned cost coefficients equal to a very large number M , say, M = 10^35.

4.2 The method


Suppose the maximization problem is already in its standard form
Max z = cx (4.1)

subject to Ax = b
and x ≥ 0
with
(i) basic variables that are not feasible (RHS = -ve), which results in
(ii) fewer than m basic variables in an m-constraint problem (a negative RHS in standard
form is not feasible; multiplying through by −1, the constraint already having an equal sign, turns the coefficient 1 of what
had been a basic variable into −1, so it is a basic variable no longer).
Let the problem be transformed to
Max z = cx − M S (4.2)

subject to Ax + S = b
and x, S ≥ 0
We add artificial variables si to obtain the required basic variables, and subtract them from the objective,
each with coefficient −M. e.g.
Max z = cx − M s1 − M s2 − M s3

Solve problem (4.2) and if at the end


(a) all si = 0 in the solution, then an optimal solution to problem (4.1) has been found;

(b) at least one sj > 0 in the final solution, then no solution exists to the original problem (4.1),
i.e. the solution space is empty.

Definition 4.2.1 The variables introduced in the LP problem (4.2) are called artificial
variables.
Note 4.2.1 M is taken large enough that it exceeds any number it will be compared
with during the carrying out of the simplex algorithm computations.
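The transformation from (4.1) to (4.2) can be carried out mechanically: append one unit column per constraint that lacks a basic variable and price it at −M in the objective. A minimal Python sketch (the concrete value M = 10**6 and the helper name are illustrative assumptions, not from the text):

```python
M = 10**6  # the "large" penalty; must dominate every other coefficient

def add_artificials(c, A, b, rows_needing_artificial):
    """Append an artificial variable (unit column, cost -M) for each
    listed constraint row of  max z = c.x  s.t.  A x = b, x >= 0."""
    c_big = list(c)
    A_big = [list(row) for row in A]
    for i in rows_needing_artificial:
        for k, row in enumerate(A_big):
            row.append(1 if k == i else 0)  # unit column: a ready-made basic variable
        c_big.append(-M)                    # penalised in the objective
    return c_big, A_big, list(b)

# Standard form of Example 4.2.1 below: the second row has no basic variable.
c_big, A_big, b = add_artificials(
    [2, 5, 0, 0],
    [[2, -1, 1, 0],
     [6, -4, 0, -1]],
    [8, 24],
    rows_needing_artificial=[1],
)
print(c_big)  # -> [2, 5, 0, 0, -1000000]
print(A_big)  # -> [[2, -1, 1, 0, 0], [6, -4, 0, -1, 1]]
```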
Example 4.2.1

max z = 2x1 + 5x2


subject to 2x1 − x2 ≤ 8
6x1 − 4x2 ≥ 24
x1 , x2 ≥ 0

Solution

(a) Convert the LP problem into canonical form (write every constraint as ≤).

max z = 2x1 + 5x2 + 0x3 + 0x4


subject to 2x1 − x2 ≤ 8
−6x1 + 4x2 ≤ −24
x1 , x2 ≥ 0

(b) Convert the LP problem into standard form by supplying slack variables.

max z = 2x1 + 5x2 + 0x3 + 0x4


subject to 2x1 − x2 + x3 = 8
−6x1 + 4x2 + x4 = −24
x1 , x2 , x3 , x4 ≥ 0

(c) Make the RHS positive (multiply the second constraint by −1).

max z = 2x1 + 5x2 + 0x3 + 0x4


subject to 2x1 − x2 + x3 = 8
6x1 − 4x2 − x4 = 24
x1 , x2 , x3 , x4 ≥ 0

(d) Now one of the equations has no basic variable, so we introduce an artificial variable
x5 .

max z = 2x1 + 5x2 + 0x3 + 0x4 − M x5


subject to 2x1 − x2 + x3 = 8
6x1 − 4x2 − x4 + x5 = 24
x1 , x2 , x3 , x4 , x5 ≥ 0

we put the information in the tableaux as follows,

Page 74 of 186 D.W-Ddumba linear programming



cj 2 5 0 0 -M 0
cBj xB x1 x2 x3 x4 x5 x̄B
0 x3 2 -1 1 0 0 8
-M x5 6 -4 0 -1 1 24
zj − cj -6M - 2 4M - 5 0 M 0 -24M

The third (and final) tableau yields the optimal solution
x∗1 = 4, x∗2 = 0, x∗3 = 0, x∗4 = 0, x∗5 = 0 and z = 8
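Because this example has only two decision variables, the tableau result can be cross-checked by brute-force enumeration of the vertices of the feasible region, independently of the simplex machinery. A sketch (the ≤-row encoding of the constraints is our own):

```python
from itertools import combinations

# Every constraint as a*x1 + b*x2 <= c; the ">=" row and the sign
# restrictions x1, x2 >= 0 are entered with negated coefficients.
cons = [
    (2.0, -1.0, 8.0),    #  2x1 -  x2 <= 8
    (-6.0, 4.0, -24.0),  #  6x1 - 4x2 >= 24
    (-1.0, 0.0, 0.0),    #  x1 >= 0
    (0.0, -1.0, 0.0),    #  x2 >= 0
]

def feasible(x1, x2, tol=1e-7):
    return all(a * x1 + b * x2 <= c + tol for a, b, c in cons)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel boundary lines: no vertex
    x1 = (c1 * b2 - c2 * b1) / det  # Cramer's rule
    x2 = (a1 * c2 - a2 * c1) / det
    if feasible(x1, x2):
        z = 2 * x1 + 5 * x2
        if best is None or z > best[0]:
            best = (z, x1, x2)

print(best)  # -> (8.0, 4.0, 0.0)
```

Here the feasible region is the single point (4, 0), reproducing z = 8.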

Solve the following LP problems using the Big-M method, that is, by the use of artificial variables.

Exercise 4.1

max z = 3x1 − x2
subject to x1 + x2 ≥ 1
x1 + 2x2 ≤ 8
x1 ≤ 2
x1 , x2 ≥ 0

Exercise 4.2

max z = x1 + 2x2 + 3x3


subject to 2x1 + x2 + 4x3 ≤ 15
3x1 − 4x2 + 5x3 ≤ −12
4x1 + 5x2 − 6x3 ≤ 9
x1 , x2 , x3 ≥ 0


Example 4.2.2

max z = 3x1 + 4x2


subject to 2x1 + x2 ≤ 600
x1 + x2 ≤ 225
5x1 + 4x2 ≤ 1000
x1 + 2x2 ≥ 150
x1 , x2 ≥ 0

We change into a standard form via the canonical form

max z = 3x1 + 4x2


subject to 2x1 + x2 + x3 = 600
x1 + x2 + x4 = 225
5x1 + 4x2 + x5 = 1000
x1 + 2x2 − x6 = 150
x1 , x2 , x3 , x4 , x5 , x6 ≥ 0

This is not in standard form because there is no basic variable in the fourth equation. Therefore we
add an artificial variable s1 to that equation and give it a large negative coefficient in the
objective function, to penalize it:

max z = 3x1 + 4x2 − M s1


subject to 2x1 + x2 + x3 = 600
x1 + x2 + x4 = 225
5x1 + 4x2 + x5 = 1000
x1 + 2x2 − x6 + s1 = 150
x1 , x2 , x3 , x4 , x5 , x6 , s1 ≥ 0

cj 3 4 0 0 0 0 -M 0
cBj xB x1 x2 x3 x4 x5 x6 s1 x̄B
0 x3 2 1 1 0 0 0 0 600
0 x4 1 1 0 1 0 0 0 225
0 x5 5 4 0 0 1 0 0 1000
−M s1 1 2? 0 0 0 -1 1 150
zj − cj −3 − M −4 − 2M 0 0 0 M 0 -150M
The minimum in zj − cj is −4 − 2M, so x2 enters. The ratios are 600/1 = 600, 225/1 = 225, 1000/4 = 250, 150/2 = 75.
The minimum is the last, thus 2 is our pivot.
cj 3 4 0 0 0 0 -M 0
cBj xB x1 x2 x3 x4 x5 x6 s1 x̄B
0 x3 1.5 0 1 0 0 0.5 -0.5 525
0 x4 0.5 0 0 1 0 0.5? -0.5 150
0 x5 3 0 0 0 1 2 -2 700
4 x2 0.5 1 0 0 0 -0.5 0.5 75
zj − cj -1 0 0 0 0 -2 M+2 300


Which will finally give the optimal tableau


cj 3 4 0 0 0 0 -M 0
cBj xB x1 x2 x3 x4 x5 x6 s1 x̄B
0 x3 1 0 1 -1 0 0 0 375
0 x6 1 0 0 2 0 1 -1 300
0 x5 1 0 0 -4 1 0 0 100
4 x2 1 1 0 1 0 0 0 225
zj − cj 1 0 0 4 0 0 M 900
Optimal solution is
x∗1 = 0, x∗2 = 225, z ∗ = 900
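As before, the two-variable problem admits a brute-force vertex check (a sketch with our own ≤-row encoding, independent of the tableaux):

```python
from itertools import combinations

cons = [
    (2.0, 1.0, 600.0),     # 2x1 +  x2 <= 600
    (1.0, 1.0, 225.0),     #  x1 +  x2 <= 225
    (5.0, 4.0, 1000.0),    # 5x1 + 4x2 <= 1000
    (-1.0, -2.0, -150.0),  #  x1 + 2x2 >= 150
    (-1.0, 0.0, 0.0),      #  x1 >= 0
    (0.0, -1.0, 0.0),      #  x2 >= 0
]

def feasible(x1, x2, tol=1e-7):
    return all(a * x1 + b * x2 <= c + tol for a, b, c in cons)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue
    x1 = (c1 * b2 - c2 * b1) / det  # Cramer's rule
    x2 = (a1 * c2 - a2 * c1) / det
    if feasible(x1, x2):
        z = 3 * x1 + 4 * x2
        if best is None or z > best[0]:
            best = (z, x1, x2)

print(best)  # -> (900.0, 0.0, 225.0)
```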

Note 4.2.2 If every artificial variable satisfies si = 0 after solving, then the LP has a solution, since
the problem is then equivalent to the original one before introducing the M in the objective. If si ≠ 0
for some i, then no feasible solution exists, and the Big-M method reports this failure.
Note 4.2.3 Minimizing directly with Big M is complicated; first convert the problem to a maximization.
Example 4.2.3 Maximize the Objective Function (P)

P = 15x1 + 10x2 + 17x3

subject to

1.0x1 + 1.0x2 + 1.0x3 ≤ 85


1.25x1 + 0.5x2 + 1.0x3 ≥ 90
0.6x1 + 1.0x2 + 0.5x3 = 55
x1 , x2 , x3 ≥ 0

x1 = 50, x2 = 15, x3 = 20, p = 1240
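With three variables the same vertex-enumeration idea still applies, now solving 3×3 systems by Cramer's rule on triples of active constraints (the equality is entered as two opposite inequalities; this check is a sketch of ours, not part of the text):

```python
from itertools import combinations

# Rows (a1, a2, a3, b) meaning a1*x1 + a2*x2 + a3*x3 <= b.
rows = [
    (1.0, 1.0, 1.0, 85.0),        # x1 + x2 + x3 <= 85
    (-1.25, -0.5, -1.0, -90.0),   # 1.25x1 + 0.5x2 + x3 >= 90
    (0.6, 1.0, 0.5, 55.0),        # 0.6x1 + x2 + 0.5x3 = 55 ...
    (-0.6, -1.0, -0.5, -55.0),    # ... written as two inequalities
    (-1.0, 0.0, 0.0, 0.0),
    (0.0, -1.0, 0.0, 0.0),
    (0.0, 0.0, -1.0, 0.0),
]

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def feasible(x, tol=1e-7):
    return all(r[0] * x[0] + r[1] * x[1] + r[2] * x[2] <= r[3] + tol for r in rows)

best = None
for trio in combinations(rows, 3):
    m = [list(r[:3]) for r in trio]
    d = det3(m)
    if abs(d) < 1e-9:
        continue  # the three boundary planes do not meet in a single point
    x = []
    for j in range(3):  # Cramer's rule: column j replaced by the RHS
        mj = [row[:] for row in m]
        for k in range(3):
            mj[k][j] = trio[k][3]
        x.append(det3(mj) / d)
    if feasible(x):
        p = 15 * x[0] + 10 * x[1] + 17 * x[2]
        if best is None or p > best[0]:
            best = (p, x)

print(best)  # approximately (1240, [50, 15, 20])
```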

Class 4.2.1

min z = 4x1 + x2
subject to 3x1 + x2 = 3
4x1 + 3x2 ≥ 6
x1 + 2x2 ≤ 4
x1 , x2 ≥ 0

We can only solve a max problem

max π = −4x1 − x2
subject to 3x1 + x2 = 3
4x1 + 3x2 ≥ 6
x1 + 2x2 ≤ 4
x1 , x2 ≥ 0


Step(a): canonical form the problem:

max π = −4x1 − x2
subject to 3x1 + x2 = 3
−4x1 − 3x2 ≤ −6
x1 + 2x2 ≤ 4
x1 , x2 ≥ 0

Step(b): standard form:

max π = −4x1 − x2
subject to 3x1 + x2 = 3
−4x1 − 3x2 + s1 = −6
x1 + 2x2 + s2 = 4
x1 , x2 ≥ 0

Step(c): RHS = +ve:

max π = −4x1 − x2
subject to 3x1 + x2 = 3
4x1 + 3x2 − s1 = 6
x1 + 2x2 +s2 = 4
x1 , x2 ≥ 0

Step(d): Basic variables.


There is only one basic variable, yet we need three, since there are three constraints. So we add some
artificial variables.

max π = −4x1 − x2 − M R1 − M R2
subject to 3x1 + x2 +R1 = 3
4x1 + 3x2 − s1 +R2 = 6
x1 + 2x2 +s2 = 4
x1 , x2 ≥ 0

Step(e): Enter Tableau.


cj -4 -1 0 -M -M 0 0
cBj xB x1 x2 s1 R1 R2 s2 x̄B
-M R1 3? 1 0 1 0 0 3
-M R2 4 3 -1 0 1 0 6
0 s2 1 2 0 0 0 1 4
zj − cj -7M+4 -4M+1 M 0 0 0 -9M
-4 x1 1 1/3 0 1/3 0 0 1
-M R2 0 5/3? -1 -4/3 1 0 2
0 s2 0 5/3 0 -1/3 0 1 3
zj − cj 0 (-5M-1)/3 M (7M-4)/3 0 0 -2M-4
-4 x1 1 0 1/5 3/5 -1/5 0 3/5
-1 x2 0 1 -3/5 -4/5 3/5 0 6/5
0 s2 0 0 1? 1 -1 1 1
zj − cj 0 0 -1/5 ∗ ∗ 0 -18/5
-4 x1 1 0 0 2/5 0 -1/5 2/5
-1 x2 0 1 0 -1/5 0 3/5 9/5
0 s1 0 0 1 1 -1 1 1
zj − cj 0 0 0 ∗ ∗ 1/5 -17/5

Which gives (x1 , x2 , s1 , s2 , R1 , R2 ; max π) = (2/5, 9/5, 1, 0, 0, 0; −17/5):

min z = 17/5
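The value 17/5 can be cross-checked by vertex enumeration (the equality constraint is entered as two opposite ≤-rows; the encoding is our own sketch):

```python
from itertools import combinations

cons = [
    (3.0, 1.0, 3.0),      # 3x1 + x2 = 3 ...
    (-3.0, -1.0, -3.0),   # ... as two inequalities
    (-4.0, -3.0, -6.0),   # 4x1 + 3x2 >= 6
    (1.0, 2.0, 4.0),      #  x1 + 2x2 <= 4
    (-1.0, 0.0, 0.0),     #  x1 >= 0
    (0.0, -1.0, 0.0),     #  x2 >= 0
]

def feasible(x1, x2, tol=1e-7):
    return all(a * x1 + b * x2 <= c + tol for a, b, c in cons)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue
    x1 = (c1 * b2 - c2 * b1) / det  # Cramer's rule
    x2 = (a1 * c2 - a2 * c1) / det
    if feasible(x1, x2):
        z = 4 * x1 + x2
        if best is None or z < best[0]:  # minimisation
            best = (z, x1, x2)

print(best)  # min z = 17/5 at (2/5, 9/5)
```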

Example 4.2.4

Max p = 2x1 + x2
subject to 3x1 − x2 ≥ 6
2x1 + 3x2 ≤ 12
x1 , x2 ≥ 0

Step(a): canonical form the problem:

Max p = 2x1 + x2
subject to −3x1 + x2 ≤ −6
2x1 + 3x2 ≤ 12
x1 , x2 ≥ 0

Step(b): standard form:

Max p = 2x1 + x2
subject to −3x1 + x2 + x3 = −6
2x1 + 3x2 + x4 = 12
x1 , x2 , x3 , x4 ≥ 0


Step(c): RHS = +ve:

Max p = 2x1 + x2
subject to 3x1 − x2 − x3 = 6
2x1 + 3x2 +x4 = 12
x1 , x2 , x3 , x4 ≥ 0

Step(d): Basic variables.


Since only x4 is a basic variable, and the number of basic variables should equal the number of
constraints (2), we force in another variable (an artificial variable), which incurs a penalty M, as you
are sinking in more money in the business, not planned for.

Max p = 2x1 + x2 − M s1
subject to 3x1 − x2 − x3 +s1 = 6
2x1 + 3x2 +x4 = 12
x1 , x2 , x3 , x4 ≥ 0

Step(e): Enter Tableau.

cj 2 1 0 0 -M 0
cBj xB x1 x2 x3 x4 s1 x̄B
-M s1 3? -1 -1 0 1 6
0 x4 2 3 0 1 0 12
zj − cj -3M-2 M-1 M 0 0 -6M
2 x1 1 -1/3 -1/3 0 1/3 2
0 x4 0 11/3? 2/3 1 -2/3 8
zj − cj 0 -5/3 -2/3 0 2/3+M 4
2 x1 1 0 -3/11 1/11 3/11 30/11
1 x2 0 1 2/11? 3/11 -2/11 24/11
zj − cj 0 0 -4/11 5/11 4/11+M 84/11
2 x1 1 3/2 0 1/2 0 6
0 x3 0 11/2 1 3/2 -1 12
zj − cj 0 2 0 1 M 12

Which gives (x1 , x2 , x3 , x4 , s1 ; p) = (6, 0, 12, 0, 0; 12):
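The optimum p = 12 survives the same brute-force vertex check (our own ≤-row encoding):

```python
from itertools import combinations

cons = [
    (-3.0, 1.0, -6.0),   # 3x1 -  x2 >= 6
    (2.0, 3.0, 12.0),    # 2x1 + 3x2 <= 12
    (-1.0, 0.0, 0.0),    #  x1 >= 0
    (0.0, -1.0, 0.0),    #  x2 >= 0
]

def feasible(x1, x2, tol=1e-7):
    return all(a * x1 + b * x2 <= c + tol for a, b, c in cons)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue
    x1 = (c1 * b2 - c2 * b1) / det  # Cramer's rule
    x2 = (a1 * c2 - a2 * c1) / det
    if feasible(x1, x2):
        p = 2 * x1 + x2
        if best is None or p > best[0]:
            best = (p, x1, x2)

print(best)  # -> (12.0, 6.0, 0.0)
```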

Example 4.2.5
A construction site requires a minimum of 10, 000cu meters of sand and gravel mixture. The
mixture must contain no less than 5, 000cu meters of sand and no more than 6, 000cu meters
of gravel. Materials may be obtained from two sites: 30% of sand and 70% of gravel from site
1, at a delivery cost of $5 per cu meter, and 60% of sand and 40% of gravel from site 2, at a
delivery cost of $7.00 per cu meter.

(a) Formulate the problem as a LP model.

(b) Solve using linear programming and hand calculations.


min z = 5x1 + 7x2


subject to 0.3x1 + 0.6x2 ≥ 5000
0.7x1 + 0.4x2 ≤ 6000
x1 + x2 ≥ 10, 000
x1 , x2 ≥ 0

We cannot solve this unless it is a Max problem.

Max π = −5x1 − 7x2


subject to 0.3x1 + 0.6x2 ≥ 5000
0.7x1 + 0.4x2 ≤ 6000
x1 + x2 ≥ 10, 000
x1 , x2 ≥ 0

Step(a): canonical form the problem:

Max π = −5x1 − 7x2


subject to −0.3x1 − 0.6x2 ≤ −5000
0.7x1 + 0.4x2 ≤ 6000
−x1 − x2 ≤ −10, 000
x1 , x2 ≥ 0

Step(b): standard form:

Max π = −5x1 − 7x2


subject to −0.3x1 − 0.6x2 + x3 = −5000
0.7x1 + 0.4x2 + x4 = 6000
−x1 − x2 + x5 = −10, 000
x1 , x2 , x3 , x4 , x5 ≥ 0

Step(c): RHS = +ve:

Max π = −5x1 − 7x2


subject to 0.3x1 + 0.6x2 − x3 = 5000
0.7x1 + 0.4x2 +x4 = 6000
x1 + x2 − x5 = 10, 000
x1 , x2 , x3 , x4 , x5 ≥ 0

Step(d): Basic variables.


Since only x4 is a basic variable, and the number of basic variables should equal the number of
constraints (3), we force in other variables (the artificial variables), which incur a penalty M, as you
are sinking in more money in the business, not planned for.

Max π = −5x1 − 7x2 − M s1 − M s2


subject to 0.3x1 + 0.6x2 − x3 +s1 = 5000
0.7x1 + 0.4x2 +x4 = 6000
x1 + x2 − x5 + s2 = 10, 000
x1 , x2 , x3 , x4 , x5 , s1 , s2 ≥ 0

Step(e): Enter Tableau.


   
(x1 , x2 , x3 , x4 , x5 , s1 , s2 ; π) = (10000/3, 20000/3, 0, 1000, 0, 0, 0; −190000/3)
max π = −190000/3, thus min Z = 190000/3 ≈ 63, 333.33.

Example 4.2.6

min z = 2x1 − 3x2 + x3


subject to 3x1 − 2x2 + x3 ≤ 5
x1 + 3x2 − 4x3 ≤ 9
x2 + 5x3 ≥ 1
x1 + x2 + x3 = 6
x1 , x2 , x3 ≥ 0

(x1 , x2 , x3 ; min z) = (0, 33/7, 9/7; −90/7)

Example 4.2.7

min z = 2x1 + 3x2


subject to (1/2)x1 + (1/4)x2 ≤ 4
x1 + 3x2 ≥ 20
x1 + x2 = 10
x1 , x2 ≥ 0

   
(x1 , x2 ; min z) = (5, 5; 25)
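A quick brute-force vertex check of this answer (our own encoding; the equality constraint becomes two opposite ≤-rows):

```python
from itertools import combinations

cons = [
    (0.5, 0.25, 4.0),      # (1/2)x1 + (1/4)x2 <= 4
    (-1.0, -3.0, -20.0),   # x1 + 3x2 >= 20
    (1.0, 1.0, 10.0),      # x1 + x2 = 10 ...
    (-1.0, -1.0, -10.0),   # ... as two inequalities
    (-1.0, 0.0, 0.0),      # x1 >= 0
    (0.0, -1.0, 0.0),      # x2 >= 0
]

def feasible(x1, x2, tol=1e-7):
    return all(a * x1 + b * x2 <= c + tol for a, b, c in cons)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue
    x1 = (c1 * b2 - c2 * b1) / det  # Cramer's rule
    x2 = (a1 * c2 - a2 * c1) / det
    if feasible(x1, x2):
        z = 2 * x1 + 3 * x2
        if best is None or z < best[0]:  # minimisation
            best = (z, x1, x2)

print(best)  # -> (25.0, 5.0, 5.0)
```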

Example 4.2.8

max w = x1 + 5x2
subject to 3x1 + 4x2 ≤ 6
x1 + 3x2 ≥ 2
x1 , x2 ≥ 0

(x1 , x2 ; max w) = (0, 3/2; 15/2)


Example 4.2.9

min π = 2x1 − 3x2 + x3


subject to 3x1 − 2x2 + x3 ≤ 5
x1 + 3x2 − 4x3 ≤ 9
x2 + 5x3 ≥ 1
x1 + x2 + x3 = 6
x1 , x2 , x3 ≥ 0

Cant solve a min problem, we first change it to max.

max p = −2x1 + 3x2 − x3


subject to 3x1 − 2x2 + x3 ≤ 5
x1 + 3x2 − 4x3 ≤ 9
x2 + 5x3 ≥ 1
x1 + x2 + x3 = 6
x1 , x2 , x3 ≥ 0

Step(a): canonical form the problem:

max p = −2x1 + 3x2 − x3


subject to 3x1 − 2x2 + x3 ≤ 5
x1 + 3x2 − 4x3 ≤ 9
−x2 − 5x3 ≤ −1
x1 + x2 + x3 = 6
x1 , x2 , x3 ≥ 0

Step(b): standard form:

max p = −2x1 + 3x2 − x3


subject to 3x1 − 2x2 + x3 + s1 = 5
x1 + 3x2 − 4x3 + s2 = 9
−x2 − 5x3 + s3 = −1
x1 + x2 + x3 = 6
x1 , x2 , x3 ≥ 0

Step(c): RHS = +ve:


max p = −2x1 + 3x2 − x3


subject to 3x1 − 2x2 + x3 + s1 = 5
x1 + 3x2 − 4x3 + s2 = 9
x2 + 5x3 − s3 = 1
x1 + x2 + x3 = 6
x1 , x2 , x3 ≥ 0

Step(d): Basic variables.


Realize that constraints 3 and 4 have no basic variables.

max p = −2x1 + 3x2 − x3 − M s4 − M s5


subject to 3x1 − 2x2 + x3 +s1 = 5
x1 + 3x2 − 4x3 +s2 = 9
x2 + 5x3 − s3 +s4 = 1
x1 + x2 + x3 +s5 = 6
x1 , x2 , x3 ≥ 0

With s4 and s5 just added in as artificial variables.

Step(e): Enter Tableau.


(x1 , x2 , x3 ; max p; min π) = (0, 33/7, 9/7; 90/7; −90/7)

Example 4.2.10

max z = 3x1 + x2 − x3
subject to x1 + x2 + x3 = 10
2x1 − x2 ≥ 2
x1 − 2x2 + x3 ≤ 6
x1 , x2 , x3 ≥ 0

Which will give the final stage as

max z = 3x1 + x2 − x3 − M s1 − M s2
subject to x1 + x2 + x3 +s1 = 10
2x1 − x2 − x4 +s2 = 2
x1 − 2x2 + x3 +x5 = 6
x1 , x2 , x3 , x4 , x5 , s1 , s2 ≥ 0

[Step(e):] Enter Tableau.


cj 3 1 -1 0 0 -M -M 0
cBj xB x1 x2 x3 x4 x5 s1 s2 x̄B
-M s1 1 1 1 0 0 1 0 10
-M s2 2? -1 0 -1 0 0 1 2
0 x5 1 -2 1 0 1 0 0 6
zj − cj -3M-3 -1 -M+1 M 0 0 0 -12M
-M s1 0 3/2? 1 1/2 0 1 -1/2 9
3 x1 1 -1/2 0 -1/2 0 0 1/2 1
0 x5 0 -3/2 1 1/2 1 0 -1/2 5
zj − cj 0 (-3M-5)/2 -M+1 (-M-3)/2 0 0 (3M+3)/2 -9M+3
1 x2 0 1 2/3 1/3 0 2/3 -1/3 6
3 x1 1 0 1/3 -1/3 0 1/3 1/3 4
0 x5 0 0 2 1? 1 1 -1 14
zj − cj 0 0 8/3 -2/3 0 ∗ ∗ 18
1 x2 0 1 0 0 -1/3 ∗ ∗ 4/3
3 x1 1 0 1 0 1/3 ∗ ∗ 26/3
0 x4 0 0 2 1 1 ∗ ∗ 14
zj − cj 0 0 4 0 2/3 ∗ ∗ 82/3

Which gives (x1 , x2 , x3 , x4 , x5 ; max z) = (26/3, 4/3, 0, 14, 0; 82/3):
:

Note 4.2.4 If an artificial variable leaves (exits) the basis vector, it never comes back.

Example 4.2.11

min z = 3x1 + x2 − x3
subject to x1 + x2 + x3 = 10
2x1 − x2 ≥ 2
x1 − 2x2 + x3 ≤ 6
x1 , x2 , x3 ≥ 0

(x1 , x2 , x3 ; z) = (5/3, 4/3, 7; −2/3 ≈ −0.66667)

Example 4.2.12 Consider the following linear programming problem.

min p = 6x1 + 7x2 + x3 + 15x4


subject to x1 + x2 + 3x4 = 6
x1 + x2 − x3 + 2x4 ≥ 5
x1 , x2 , x3 , x4 ≥ 0

(a) Apply a suitable simplex method (Big M) to show that the optimal value is 33, and give
values of the variables at which this is attained. Describe briefly the reasoning behind
each step. [When you have a free choice of the pivots, you are recommended to choose
the one furthest to the left].

(b) Give the optimal value for the dual, and explain your reasoning:[You are not expected to
solve the dual problem]


Exercise 4.3

minimize z = 0.4x1 + 0.5x2


subject to 0.3x1 + 0.1x2 ≤ 2.7
0.5x1 + 0.5x2 = 6
0.6x1 + 0.4x2 ≥ 6
x1 , x2 ≥ 0

(x1 , x2 , x3 , s1 , s2 ) = (6, 6, 3/10, 0, 0)
Why do you think primal-simplex could not solve the problem? Use the geometrical method
to solve the problem.

Example 4.2.13

max π = 3x1 + x3
subject to x1 + 2x2 + x3 = 10
x1 − 2x2 + 2x3 = 6
x1 , x2 , x3 ≥ 0

Realize that this is already in standard form with positive RHS, but it has fewer basic variables than
constraints; thus the primal simplex cannot solve it, but Big M can, by adding in some artificial variables to obtain the
required number of basic variables.

max π = 3x1 + x3 − M s1 − M s2
subject to x1 + 2x2 + x3 + s1 = 10
x1 − 2x2 + 2x3 + s2 = 6
x1 , x2 , x3 , s1 , s2 ≥ 0

cj 3 0 1 -M -M 0
cBj xB x1 x2 x3 s1 s2 x̄B
-M s1 1 2 1 1 0 10
-M s2 1 -2 2? 0 1 6
zj − cj -2M-3 0 -3M-1 0 0 -16M
-M s1 1/2 3? 0 1 -1/2 7
1 x3 1/2 -1 1 0 1/2 3
zj − cj (-M-5)/2 -3M-1 0 0 (3M+1)/2 -7M+3
0 x2 1/6 1 0 1/3 -1/6 7/3
1 x3 2/3? 0 1 1/3 1/3 16/3
zj − cj -7/3 0 0 1/3+M 1/3+M 16/3
0 x2 0 1 -1/4 1/4 -1/4 1
3 x1 1 0 3/2 1/2 1/2 8
zj − cj 0 0 7/2 3/2+M 3/2+M 24

Note that the third tableau is not yet optimal, since z1 − c1 = −7/3 < 0, so x1 must enter once more.
Which gives (x1 , x2 , x3 ) = (8, 1, 0) and
max π = 24

Example 4.2.14
min z = 2x1 + x2
subject to 3x1 + x2 ≥ 9
x1 + x2 ≥ 6
x1 , x2 ≥ 0

   3 
x1 2
 x2  =  9 
2
15
min z 2
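A brute-force vertex check agrees with the graphical picture (our own ≤-row encoding):

```python
from itertools import combinations

cons = [
    (-3.0, -1.0, -9.0),  # 3x1 + x2 >= 9
    (-1.0, -1.0, -6.0),  #  x1 + x2 >= 6
    (-1.0, 0.0, 0.0),    #  x1 >= 0
    (0.0, -1.0, 0.0),    #  x2 >= 0
]

def feasible(x1, x2, tol=1e-7):
    return all(a * x1 + b * x2 <= c + tol for a, b, c in cons)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue
    x1 = (c1 * b2 - c2 * b1) / det  # Cramer's rule
    x2 = (a1 * c2 - a2 * c1) / det
    if feasible(x1, x2):
        z = 2 * x1 + x2
        if best is None or z < best[0]:  # minimisation
            best = (z, x1, x2)

print(best)  # -> (7.5, 1.5, 4.5)
```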
It is illuminating to look at the graphical solution too.
Example 4.2.15
max z = 2x1 + 3x2 − 5x3
subject to x1 + x2 + x3 = 7
2x1 − 5x2 + x3 ≥ 10
x1 , x2 , x3 ≥ 0

 
(x1 , x2 , x3 , max z) = (45/7, 4/7, 0, 102/7)
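The value 102/7 can be verified by enumerating vertices in three variables (equality written as two opposite rows; a sketch of ours):

```python
from itertools import combinations

rows = [
    (1.0, 1.0, 1.0, 7.0),       # x1 + x2 + x3 = 7 ...
    (-1.0, -1.0, -1.0, -7.0),   # ... as two inequalities
    (-2.0, 5.0, -1.0, -10.0),   # 2x1 - 5x2 + x3 >= 10
    (-1.0, 0.0, 0.0, 0.0),
    (0.0, -1.0, 0.0, 0.0),
    (0.0, 0.0, -1.0, 0.0),
]

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def feasible(x, tol=1e-7):
    return all(r[0] * x[0] + r[1] * x[1] + r[2] * x[2] <= r[3] + tol for r in rows)

best = None
for trio in combinations(rows, 3):
    m = [list(r[:3]) for r in trio]
    d = det3(m)
    if abs(d) < 1e-9:
        continue
    x = []
    for j in range(3):  # Cramer's rule: column j replaced by the RHS
        mj = [row[:] for row in m]
        for k in range(3):
            mj[k][j] = trio[k][3]
        x.append(det3(mj) / d)
    if feasible(x):
        z = 2 * x[0] + 3 * x[1] - 5 * x[2]
        if best is None or z > best[0]:
            best = (z, x)

print(best)  # approximately (102/7, [45/7, 4/7, 0])
```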

Remark 4.2.1
(a) If in any iteration, there is a tie for entering variable between an artificial variable and
other variable (decision, surplus or slack), we must prefer the non-artificial variable to
enter the basis.
(b) If in any iteration, there is a tie for leaving variable between an artificial variable and
other variable (decision, surplus or slack), we must prefer the artificial variable to leave
the basis.
(c) If in the final optimal tableau, an artificial variable is present in the basis at a non-zero
level, this means our original problem has no feasible solution.
Example 4.2.16
max z = 5x1 + 6x2
subject to −2x1 + 3x2 = 3
x1 + 2x2 ≤ 5
6x1 + 7x2 ≤ 3
x1 , x2 ≥ 0

After going through all the steps


max z = 5x1 + 6x2 − M s1
subject to −2x1 + 3x2 + s1 = 3
x1 + 2x2 + s2 = 5
6x1 + 7x2 + s3 = 3
x1 , x2 ≥ 0


Tableau #1
cj 5 6 -M 0 0 0
cBj xB x1 x2 s1 s2 s3 x̄B
-M s1 -2 3 1 0 0 3
0 s2 1 2 0 1 0 5
0 s3 6 7? 0 0 1 3
zj − cj -5+2M -6-3M 0 0 0 -3M
Tableau #2
cj 5 6 -M 0 0 0
cBj xB x1 x2 s1 s2 s3 x̄B
-M s1 -32/7 0 1 0 -3/7 12/7 ⇐=
0 s2 -5/7 0 0 1 -2/7 29/7
6 x2 6/7 1 0 0 1/7 3/7
zj − cj 1/7+32M/7 0 0 0 6/7+3M/7 18/7−12M/7

This is the optimal tableau. As s1 is not zero, there is NO feasible solution.


Example 4.2.17

min z = 4x1 + 6x2


subject to −2x1 + 3x2 = 3
4x1 + 5x2 ≥ 10
4x1 + 8x2 ≥ 5
x1 , x2 ≥ 0

 
(x1 , x2 , min z) = (15/22, 16/11, 126/11)

Example 4.2.18

Maximize w = 2x1 + 4x2 + 4x3 − 3x4


subject to x 1 + x2 + x3 = 4
x1 + 4x2 + x4 = 8
x1 , x2 , x3 , x4 ≥ 0

Normally, when there are equality constraints, we add artificial variables and use either Big
M method or two phase method or primal-dual. But in this problem we already have a 2 × 2
identity submatrix in the coefficient matrix of the constraint equations. Hence there is no
need to add artificial variables. We can start the primal simplex algorithm with x3 , x4 as basic
variables.
x1 = 0, x2 = 2, x3 = 2, x4 = 0 ; max w = 16


Example 4.2.19

Maximize z = x1 + 5x2 + 3x3


subject to x1 + 2x2 + x3 = 3
2x1 − x2 = 4
x1 , x2 , x3 ≥ 0

Since x3 in the first constraint is already basic, we only have to add an artificial variable to the second
constraint equation, and then we solve it by the Big-M method.

Maximize z = x1 + 5x2 + 3x3 − M s1


subject to x1 + 2x2 + x3 = 3
2x1 − x2 + s1 = 4
x1 , x2 , x3 , s1 ≥ 0

cj 1 5 3 -M 0
cBj xB x1 x2 x3 s1 x̄B
3 x3 1 2 1 0 3
-M s1 2? -1 0 1 4
zj − cj 2-2M 1+M 0 0 9-4M
3 x3 0 5/2 1 -1/2 1
1 x1 1 -1/2 0 1/2 2
zj − cj 0 2 0 M-1 5

This is the Optimal tableau. Optimal Soln:

x1 = 2, x2 = 0, x3 = 1, max z = 5

Remark 4.2.2 Since M is a very big positive number, say 1, 000, 000, the term M − 1 is
positive. That is why we say, final tableau.

Example 4.2.20 Use the big M method to show that the following problem has no feasible
solution:

Maximize z = 2x1 + 5x2


subject to 3x1 + 2x2 ≥ 6
2x1 + x2 ≤ 2
x1 , x2 ≥ 0


After all four steps


Maximize z = 2x1 + 5x2 − M R1
subject to 3x1 + 2x2 − s1 + R1 = 6
2x1 + x2 + s2 = 2
x1 , x2 , s1 , s2 , R1 ≥ 0

We now start the Simplex algorithm.


Tableau #1
cj 2 5 0 -M 0 0
cBj xB x1 x2 s1 R1 s2 x̄B
-M R1 3 2 -1 1 0 6
0 s2 2? 1 0 0 1 2
zj − cj -2-3M -5-2M M 0 0 - 6M
Tableau #2
cj 2 5 0 -M 0 0
cBj xB x1 x2 s1 R1 s2 x̄B
-M R1 0 1/2 -1 1 -3/2 3
2 x1 1 1/2? 0 0 1/2 1
zj − cj 0 -4-M/2 M 0 1+3M/2 2-3M
Tableau #3
cj 2 5 0 -M 0 0
cBj xB x1 x2 s1 R1 s2 x̄B
-M R1 -1 0 -1 1 -2 2
5 x2 2 1 0 0 1 2
zj − cj 8+M 0 M 0 5+2M 10-2M
Remark 4.2.3 Thus we have reached the optimal tableau, but the artificial variable R1 is
present at a non-zero level. This means the original LPP has no feasible solution. This can
also be checked graphically.
Assignment 4.2.1
Maximise P = 7x + 4y
subject to 2x + y ≤ 150
4x + 3y ≤ 350
x + y ≥ 80
x, y ≥ 0

(x, y; P ) = (50, 50; 550)
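The stated answer can be checked independently by vertex enumeration (our own ≤-row encoding):

```python
from itertools import combinations

cons = [
    (2.0, 1.0, 150.0),     # 2x +  y <= 150
    (4.0, 3.0, 350.0),     # 4x + 3y <= 350
    (-1.0, -1.0, -80.0),   #  x +  y >= 80
    (-1.0, 0.0, 0.0),      #  x >= 0
    (0.0, -1.0, 0.0),      #  y >= 0
]

def feasible(x, y, tol=1e-7):
    return all(a * x + b * y <= c + tol for a, b, c in cons)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue
    x = (c1 * b2 - c2 * b1) / det  # Cramer's rule
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        P = 7 * x + 4 * y
        if best is None or P > best[0]:
            best = (P, x, y)

print(best)  # -> (550.0, 50.0, 50.0)
```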


Assignment 4.2.2
Maximise P = 2x + 5y
subject to x + 4y ≤ 60
3x + 2y ≤ 40
x + y ≥ 12
x, y ≥ 0


(x, y; P ) = (4, 14; 78)

Assignment 4.2.3

Maximise P = 10x + 2y
subject to −x + 2y ≤ 60
5x + 4y ≤ 260
−x + 8y ≥ 80
x, y ≥ 0

(x, y; P ) = (40, 15; 430)

Assignment 4.2.4

Maximise P = 8x + 4y
subject to 2x + 3y ≤ 120
x + y ≤ 45
−3x + 5y ≥ 25
x, y ≥ 0

(x, y; P ) = (25, 20; 280)

Assignment 4.2.5

Maximise P = 11x + 15y


subject to 3x + 5y ≤ 130
−4x + 5y ≥ 25
x + 5y ≥ 75
x, y ≥ 0

(x, y; P ) = (15, 17; 420)

Assignment 4.2.6

Maximise P = 24x + 21y + 30z


subject to 12x + 4y + 8z ≤ 240
8x + 3y + 3z ≤ 140
6x + 2y + 3z ≥ 110
x, y, z ≥ 0

(x, y, z; P ) = (10, 10, 10; 750)


Assignment 4.2.7

Minimise P = −3x + 4y
subject to x + 3y ≤ 54
3x + y ≤ 34
−x + 2y ≥ 12
x, y ≥ 0

(x, y; P ) = (8, 10; 16)

Assignment 4.2.8

Minimise P = 2x − 8y
subject to 8x + 4y ≤ 80
−3x + 4y ≥ 8
x + 4y ≥ 40
x, y ≥ 0

(x, y; P ) = (0, 20; −160)



Chapter 5

The Dual simplex method

There is a technique for solving min problems and even max problems: for any min problem,
write its corresponding max problem (by just multiplying by a -ve) and solve that by this
method.

Dual simplex:
Min: objective vector c > 0 in all components; change to max so that c < 0 in all components.
Max: objective vector c < 0 in all components; solve in the tableau directly.

This is because the zj − cj have to be non-negative even before starting to solve, as now we are making x̄B
positive, not the zj − cj as in the primal simplex (but still mostly max problems).
Thus change the problem (its corresponding max form, not its dual) so that the objective coefficients c are all
negative before you start:
Although we don't go into details here, there is another reason why duality proves valuable,
associated with the need, in real life, to modify the constraints of a problem. Suppose we have
already solved (say) a maximizing problem. We can describe the solution as feasible (every
entry in the last column is non-negative) and optimal (every entry in the bottom row is non-
negative). Adding an additional constraint is likely to render the "current" solution infeasible,
although it will remain optimal in the above sense. Applying the conventional algorithm will
first work on the (potentially) bad row, and then re-do the optimization. In contrast, the dual
problem is likely to become non-optimal (a bad row "duals" to a proper sign for improvement)
but feasible (an optimal objective row duals to a feasible but non-optimal solution). As such,
restoring optimality is likely in practice to be quicker working on the dual.

5.1 Dual Simplex method


The dual simplex method is constructed to start from this kind of initial position. We shall show
that it is precisely the simplex method applied to the dual, but constructed so as to operate
within a standard primal simplex tableau. Operationally, the algorithm still involves a sequence
of pivot steps in the tableau, but with different rules for choosing the pivot element.

5.2 Primal Vs Dual


At every step the primal simplex algorithm keeps the primal problem feasible (i.e.
x ≥ 0), and when we achieve zj − cj ≥ 0 the primal problem is optimal.
What does zj − cj ≥ 0 imply about dual feasibility?


From zj − cj = cB yj − cj we have zj − cj ≥ 0 ⇒ cB Y − c ≥ 0, that is cB Y ≥ c, and with Y = B −1 A we
have cB B −1 A ≥ c, which indicates that u = cB B −1 is a feasible solution to the dual problem;
thus the condition zj − cj ≥ 0 also implies dual feasibility.
A basic solution is optimal if and only if it is both dual and primal feasible.
The dual simplex method starts with a dual feasible solution, maintains its dual feasibility
at all iterations, and tries to attain primal feasibility (i.e. xi ≥ 0 ∀ i). The dual simplex method
is carried out in the same tableau as the primal simplex method.

5.3 Dual methodology


The method can be described in the following steps,
(i) Start with a tableau in which zj − cj ≥ 0 for j = 1, 2, 3, . . . , n, that is, write the problem
in a dual feasible form if possible.

(ii) If xBi ≥ 0 for i = 1, 2, 3, . . . , m, then the solution is optimal; otherwise continue.

(iii) Exit criterion

Select the row k with the most negative basic value, xBk = min{xBi : xBi < 0}
(equivalently the largest |xBi | among the negative ones); its basic variable leaves the basis.

(iv) Entry criterion

Among the columns with ykj < 0 in row k, the entering variable xl is determined by
max_{j: ykj < 0} (zj − cj )/ykj = (zl − cl )/ykl .
If no ykj < 0 exists in row k, the problem has no feasible solution.

(v) The usual simplex pivot (on ykl ) works thereafter, and back to (ii).
Note 5.3.1
• Unlike in the primal simplex, where you first find the minimum zj − cj , here you first
find the minimum of x̄B .
• Unlike in the primal simplex, where you find min x̄Bi /yij over yij > 0, in the dual simplex we take
max_{j: ykj < 0} (zj − cj )/ykj , i.e. the negative ratio closest to zero.
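The exit and entry rules above can be condensed into a small pivot-selection routine. A sketch (names and list conventions are ours; the zj − cj row is assumed already non-negative, i.e. dual feasible):

```python
def dual_simplex_pivot(rows, rhs, zc):
    """Choose (leaving_row, entering_col) for one dual-simplex step.

    rows -- the tableau body y[i][j]
    rhs  -- current basic values x_B (some entries may be negative)
    zc   -- the z_j - c_j row, assumed >= 0 (dual feasible)
    Returns None when rhs is already non-negative (optimal)."""
    r = min(range(len(rhs)), key=lambda i: rhs[i])  # most negative RHS leaves
    if rhs[r] >= 0:
        return None
    ratios = [(zc[j] / rows[r][j], j) for j in range(len(zc)) if rows[r][j] < 0]
    if not ratios:
        raise ValueError("no feasible solution")    # whole pivot row >= 0
    return r, max(ratios)[1]  # the largest (least negative) ratio picks the column

# First tableau of Class 5.3.1 below: x4 leaves (rhs -4), x1 enters.
print(dual_simplex_pivot([[-2, 3, -1, 1, 0], [-1, -1, -2, 0, 1]],
                         [-4, -3], [2, 3, 3, 0, 0]))  # -> (0, 0)
```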

Class 5.3.1 Find the optimal solution to the following LP problem using the simplex algo-
rithm,
min w = 2x1 + 3x2 + 3x3
subject to 2x1 − 3x2 + x3 ≥ 4
x1 + x2 + 2x3 ≥ 3
x1 , x2 , x3 ≥ 0
Solution
Write the LP problem as a maximization problem as follows:
max w = −2x1 − 3x2 − 3x3
subject to − 2x1 + 3x2 − x3 ≤ −4
−x1 − x2 − 2x3 ≤ −3
x1 , x2 , x3 ≥ 0


Rewrite the LP problem in standard form with slack variables as follows,

max w = −2x1 − 3x2 − 3x3


subject to − 2x1 + 3x2 − x3 + x4 = −4
−x1 − x2 − 2x3 + x5 = −3
x1 , x2 , x3 , x4 , x5 ≥ 0

Transform the information into the tableau and use the dual simplex method to obtain,
cj −2 −3 -3 0 0 0
cBj xB x1 x2 x3 x4 x5 x̄B
0 x4 −2? 3 -1 1 0 -4
0 x5 -1 -1 -2 0 1 -3
zj − cj 2 3 3 0 0 0
−2 x1 1 -3/2 1/2 -1/2 0 2
0 x5 0 -5/2 -3/2? -1/2 1 -1
zj − cj 0 6 2 1 0 −4
−2 x1 1 -7/3 0 -2/3 1/3 5/3
−3 x3 0 5/3 1 1/3 -2/3 2/3
zj − cj 0 8/3 0 1/3 4/3 −16/3

The optimal solution therefore is x∗1 = 5/3, x∗2 = 0, x∗3 = 2/3, x∗4 = 0, x∗5 = 0 and min w = 16/3
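The value 16/3 can be cross-checked by three-variable vertex enumeration (our own ≤-row encoding, independent of the dual simplex):

```python
from itertools import combinations

rows = [
    (-2.0, 3.0, -1.0, -4.0),   # 2x1 - 3x2 + x3 >= 4
    (-1.0, -1.0, -2.0, -3.0),  # x1 + x2 + 2x3 >= 3
    (-1.0, 0.0, 0.0, 0.0),
    (0.0, -1.0, 0.0, 0.0),
    (0.0, 0.0, -1.0, 0.0),
]

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def feasible(x, tol=1e-7):
    return all(r[0] * x[0] + r[1] * x[1] + r[2] * x[2] <= r[3] + tol for r in rows)

best = None
for trio in combinations(rows, 3):
    m = [list(r[:3]) for r in trio]
    d = det3(m)
    if abs(d) < 1e-9:
        continue
    x = []
    for j in range(3):  # Cramer's rule: column j replaced by the RHS
        mj = [row[:] for row in m]
        for k in range(3):
            mj[k][j] = trio[k][3]
        x.append(det3(mj) / d)
    if feasible(x):
        w = 2 * x[0] + 3 * x[1] + 3 * x[2]
        if best is None or w < best[0]:  # minimisation
            best = (w, x)

print(best)  # approximately (16/3, [5/3, 0, 2/3])
```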

Exercise 5.1 Solve the following LP problems using the Dual simplex method,

(i)

min w = 2x1 + 15x2 + 18x3


subject to − x1 + 2x2 − 6x3 ≤ −10
x2 + 2x3 ≤ 6
2x1 + 11x3 ≤ 19
−x1 + x2 ≤ −2
x1 , x2 , x3 ≥ 0

(ii)

max z = −5x1 − 35x2 − 20x3


subject to x1 − x2 − x3 ≤ −2
−x1 − 3x2 ≤ −3
x1 , x2 , x3 ≥ 0

(iii)

max w = 2x1 + 3x2


subject to − x1 + x2 ≤ 5
x1 + 3x2 ≤ 35
x1 ≤ 20
−x1 + x2 ≤ −2
x1 , x2 ≥ 0


(iv)

max w = x1 + x2
subject to 3x1 + x2 ≥ 12
−3x1 + 5x2 ≤ 12
x1 , x2 ≥ 0

Example 5.3.1 Min z = 2x1 + 15x2 + 18x3 subject to:

−x1 + 2x2 − 6x3 ≤ −10


x2 + 2x3 ≤ 6
2x1 + 11x3 ≤ 19
−x1 + x2 ≤ −2
x1 , x2 , x3 ≥ 0

Here the objective vector is not all negative, so we arrange this by rewriting the problem as a max.

Max z = −2x1 − 15x2 − 18x3


subject to:

−x1 + 2x2 − 6x3 ≤ −10


x2 + 2x3 ≤ 6
2x1 + 11x3 ≤ 19
−x1 + x2 ≤ −2
x1 , x2 , x3 ≥ 0

In standard form: Max z = −2x1 − 15x2 − 18x3 subject to:

−x1 + 2x2 − 6x3 + x4 = −10


x2 + 2x3 + x5 = 6
2x1 + 11x3 + x6 = 19
−x1 + x2 + x7 = −2
x1 , x2 , x3 , x4 , x5 , x6 , x7 ≥ 0

The method is explained in the following three tableaux.

cj −2 −15 -18 0 0 0 0 0
cBj xB x1 x2 x3 x4 x5 x6 x7 x̄B
0 x4 -1? 2 -6 1 0 0 0 -10
0 x5 0 1 2 0 1 0 0 6
0 x6 2 0 11 0 0 1 0 19
0 x7 -1 1 0 0 0 0 1 -2
zj − cj 2 15 18 0 0 0 0 0


cj −2 −15 -18 0 0 0 0 0
cBj xB x1 x2 x3 x4 x5 x6 x7 x̄B
-2 x1 1 -2 6 -1 0 0 0 10
0 x5 0 1 2 0 1 0 0 6
0 x6 0 4 -1? 2 0 1 0 -1
0 x7 0 -1 6 -1 0 0 1 8
zj − cj 0 19 6 2 0 0 0 -20

cj −2 −15 -18 0 0 0 0 0
cBj xB x1 x2 x3 x4 x5 x6 x7 x̄B
-2 x1 1 22 0 11 0 6 0 4
0 x5 0 9 0 4 1 2 0 4
-18 x3 0 -4 1 -2 0 -1 0 1
0 x7 0 23 0 11 0 6 1 2
zj − cj 0 43 0 14 0 6 0 -26

X ? = (4, 0, 1, 0, 4, 0, 2)
minimum value of z = −(−26) = 26
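The minimum of 26 can be confirmed by vertex enumeration over the four constraints and the sign restrictions (a sketch of ours):

```python
from itertools import combinations

rows = [
    (-1.0, 2.0, -6.0, -10.0),  # -x1 + 2x2 - 6x3 <= -10
    (0.0, 1.0, 2.0, 6.0),      #        x2 + 2x3 <= 6
    (2.0, 0.0, 11.0, 19.0),    # 2x1      + 11x3 <= 19
    (-1.0, 1.0, 0.0, -2.0),    # -x1 + x2        <= -2
    (-1.0, 0.0, 0.0, 0.0),
    (0.0, -1.0, 0.0, 0.0),
    (0.0, 0.0, -1.0, 0.0),
]

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def feasible(x, tol=1e-7):
    return all(r[0] * x[0] + r[1] * x[1] + r[2] * x[2] <= r[3] + tol for r in rows)

best = None
for trio in combinations(rows, 3):
    m = [list(r[:3]) for r in trio]
    d = det3(m)
    if abs(d) < 1e-9:
        continue
    x = []
    for j in range(3):  # Cramer's rule: column j replaced by the RHS
        mj = [row[:] for row in m]
        for k in range(3):
            mj[k][j] = trio[k][3]
        x.append(det3(mj) / d)
    if feasible(x):
        z = 2 * x[0] + 15 * x[1] + 18 * x[2]
        if best is None or z < best[0]:  # minimisation
            best = (z, x)

print(best)  # approximately (26, [4, 0, 1])
```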

Class 5.3.2

Example 5.3.2 Use dual simplex to solve the L.P.P

max z = −5x1 − 35x2 − 20x3


subject to x1 − x2 − x3 ≤ −2
−x1 − 3x2 ≤ −3
x1 , x2 , x3 ≥ 0

Step(a): Canonical form the problem: (already in Canonical form)

Step(b): Standard form:

max z = −5x1 − 35x2 − 20x3


subject to x1 − x2 − x3 + x4 = −2
−x1 − 3x2 + x5 = −3
x1 , x2 , x3 , x4 , x5 ≥ 0

Step(c): RHS = +ve:

Step omitted because if we try to make the RHS positive, we will end up with fewer (or no) basic
variables, and in trying to restore the basic variables we would need artificial variables, which takes us
back to Big M.

Step(d): Basic variables.


Here we just consider the slack variables x4 and x5 as basic variables.

Step(e): Enter Tableau.


cj -5 -35 -20 0 0 0
cBj xB x1 x2 x3 x4 x5 x̄B
0 x4 1 -1 -1 1 0 -2
0 x5 −1? -3 0 0 1 -3
zj − cj 5 35 20 0 0 0
0 x4 0 −4? -1 1 1 -5
-5 x1 1 3 0 0 -1 3
zj − cj 0 20 20 0 5 -15
-35 x2 0 1 0.25 -0.25 -0.25 1.25
-5 x1 1 0 −0.75? 0.75 -0.25 -0.75
zj − cj 0 0 15 5 10 -40
-35 x2 0.333 1 0 0 -0.333 1
-20 x3 -1.33 0 1 -1 0.333 1
zj − cj 20 0 0 20 5 -55

max z = −55

Note 5.3.2 There is a problem with Big M: if the artificial variables do not go to zero, then
the problem we solved is a different problem. To make matters worse, you only realize this at
the end, after many tableaux.

Example 5.3.3 Solve the problem

min 2x + 5y + 6z
subject to x + 2y + z ≥ 3
x + 3y + 4z ≥ 4
x, y, z ≥ 0

We cannot solve a min problem, so we change it to a max problem

maximize −2x − 5y − 6z
subject to x + 2y + z ≥ 3
x + 3y + 4z ≥ 4
x, y, z ≥ 0

We realize all cj < 0, so the best option is the dual simplex. The graphical method cannot be used, since there
are more than two variables. The primal simplex will not work, since we shall not have enough basic variables
with positive RHS. Big M has the big disadvantage that its failure is only discovered after
solving, so it is not a good technique unless asked to use it.

Step(a): canonical form the problem:

maximize −2x − 5y − 6z
subject to −x − 2y − z ≤ −3
−x − 3y − 4z ≤ −4
x, y, z ≥ 0


Step(b): standard form:

maximize −2x − 5y − 6z
subject to −x − 2y − z + s1 = −3
−x − 3y − 4z + s2 = −4
x, y, z, s1 , s2 ≥ 0

Step(c): Omitted

Step(d): Basic variables: The basic variables are already s1 , s2 .

Step(e): Enter Tableau.

cj -2 -5 -6 0 0 0
cBj xB x y z s1 s2 x̄B
0 s1 -1 -2 -1 1 0 -3
0 s2 -1 -3 −4? 0 1 -4 ⇐
zj − cj 2 5 6 0 0 0
0 s1 -3/4 -5/4? 0 1 -1/4 -2 ⇐
-6 z 1/4 3/4 1 0 -1/4 1
zj − cj 1/2 1/2 0 0 3/2 -6
-5 y 3/5 1 0 -4/5 1/5 8/5
-6 z -1/5? 0 1 3/5 -2/5 -1/5 ⇐
zj − cj 1/5 0 0 2/5 7/5 -34/5
-5 y 0 1 3 1 -1 1
-2 x 1 0 -5 -3 2 1
zj − cj 0 0 1 1 1 -7

(x, y, z, max, min) = (1, 1, 0, −7, 7)
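A vertex-enumeration check of the minimum (our own ≤-row encoding):

```python
from itertools import combinations

rows = [
    (-1.0, -2.0, -1.0, -3.0),  # x + 2y +  z >= 3
    (-1.0, -3.0, -4.0, -4.0),  # x + 3y + 4z >= 4
    (-1.0, 0.0, 0.0, 0.0),
    (0.0, -1.0, 0.0, 0.0),
    (0.0, 0.0, -1.0, 0.0),
]

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def feasible(x, tol=1e-7):
    return all(r[0] * x[0] + r[1] * x[1] + r[2] * x[2] <= r[3] + tol for r in rows)

best = None
for trio in combinations(rows, 3):
    m = [list(r[:3]) for r in trio]
    d = det3(m)
    if abs(d) < 1e-9:
        continue
    x = []
    for j in range(3):  # Cramer's rule: column j replaced by the RHS
        mj = [row[:] for row in m]
        for k in range(3):
            mj[k][j] = trio[k][3]
        x.append(det3(mj) / d)
    if feasible(x):
        cost = 2 * x[0] + 5 * x[1] + 6 * x[2]
        if best is None or cost < best[0]:  # minimisation
            best = (cost, x)

print(best)  # approximately (7, [1, 1, 0])
```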

Example 5.3.4 Use dual simplex to solve the L.P.P

min z = x1 + x2
subject to 2x1 + x2 ≥ 2
−x1 − x2 ≥ 1
x1 , x2 ≥ 0

Since we cannot solve a min, we change it to a max

max z = −x1 − x2
subject to 2x1 + x2 ≥ 2
−x1 − x2 ≥ 1
x1 , x2 ≥ 0


Step(a): canonical form the problem:

max z = −x1 − x2
subject to −2x1 − x2 ≤ −2
x1 + x2 ≤ −1
x1 , x2 ≥ 0

Step(b): standard form:

max z = −x1 − x2
subject to −2x1 − x2 + s1 = −2
x1 + x2 + s2 = −1
x1 , x2 , s1 , s2 ≥ 0

Step(c): RHS = +ve:

This step is omitted: if we tried to make the right-hand sides positive, we would end up
with too few (or no) basic variables, and restoring a full basis would require artificial
variables, which takes us back to the Big-M method.

Step(d): Basic variables.


Here we just consider the slack variables s1 and s2 as basic variables.

Step(e): Enter Tableau.

 cj              -1     -1      0      0
 cBj   xB        x1     x2      s1     s2      x̄B
 0     s1        -2?    -1      1      0       -2    ⇐
 0     s2         1      1      0      1       -1
       zj − cj    1      1      0      0        0
 -1    x1         1      1/2   -1/2    0        1
 0     s2         0      1/2    1/2    1       -2    ⇐
       zj − cj    0      1/2    1/2    0       -1

Since x̄B2 is negative, mark the second row as the pivotal row, which makes s2 the
outgoing variable. But since the s2 row contains no negative coefficients, no variable
can enter the basis. Thus the given LPP has no feasible solution. (Indeed, the constraint
−x1 − x2 ≥ 1 cannot be satisfied by any x1, x2 ≥ 0.)

Exercise 5.2

maximize z = −2x1 − 3x2


subject to −x1 − x2 ≤ −3
2x1 − x2 ≤ −2
x1 , x2 ≥ 0

Since all cj < 0 in a maximization problem, the dual simplex method is the natural choice.

Class 5.3.3 Use dual simplex to solve the L.P.P


min z = x1 + 1.5x2
subject to 0.5x1 + x2 ≥ 7.5
2x1 + x2 ≥ 15
x1 , x2 ≥ 0

Since we cannot solve a minimization problem directly, we convert it to a maximization problem:


max z = −x1 − 1.5x2
subject to 0.5x1 + x2 ≥ 7.5
2x1 + x2 ≥ 15
x1 , x2 ≥ 0

Step(a): Canonical form of the problem:


max z = −x1 − 1.5x2
subject to −0.5x1 − x2 ≤ −7.5
−2x1 − x2 ≤ −15
x1 , x2 ≥ 0

Step(b): standard form:


max z = −x1 − 1.5x2
subject to −0.5x1 − x2 + s1 = −7.5
−2x1 − x2 + s2 = −15
x1 , x2 , s1 , s2 ≥ 0

Step(c): RHS = +ve:

This step is omitted: if we tried to make the right-hand sides positive, we would end up
with too few (or no) basic variables, and restoring a full basis would require artificial
variables, which takes us back to the Big-M method.
Step(d): Basic variables.
Here we just consider the slack variables s1 and s2 as basic variables.
Step(e): Enter Tableau.
 cj              -1     -1.5    0      0
 cBj   xB        x1     x2      s1     s2       x̄B
 0     s1        -0.5   -1      1      0        -7.5
 0     s2        -2?    -1      0      1       -15     ⇐
       zj − cj    1      1.5    0      0         0
 0     s1         0     -3/4?   1     -1/4     -15/4   ⇐
 -1    x1         1      1/2    0     -1/2      15/2
       zj − cj    0      1      0      1/2     -15/2
 -1.5  x2         0      1     -4/3    1/3       5
 -1    x1         1      0      2/3   -2/3       5
       zj − cj    0      0      4/3    1/6     -12.5

max z = −12.5 ⇒ min z = 12.5

Example 5.3.5 Repeat the problem above using the graphical method.

Example 5.3.6 Solve the following LPP problem

min π = 2x1 + 2x2 + 4x3


subject to 2x1 + 3x2 + 5x3 ≥ 2
3x1 + x2 + 7x3 ≤ 3
x1 + 4x2 + 6x3 ≤ 5
x1 , x2 , x3 ≥ 0

Since we cannot solve a minimization problem directly, we convert it to a maximization problem:

max z = −2x1 − 2x2 − 4x3


subject to 2x1 + 3x2 + 5x3 ≥ 2
3x1 + x2 + 7x3 ≤ 3
x1 + 4x2 + 6x3 ≤ 5
x1 , x2 , x3 ≥ 0

Step(a): Canonical form of the problem:

max z = −2x1 − 2x2 − 4x3


subject to −2x1 − 3x2 − 5x3 ≤ −2
3x1 + x2 + 7x3 ≤ 3
x1 + 4x2 + 6x3 ≤ 5
x1 , x2 , x3 ≥ 0

Step(b): standard form:

max z = −2x1 − 2x2 − 4x3


subject to −2x1 − 3x2 − 5x3 + x4 = −2
3x1 + x2 + 7x3 + x5 = 3
x1 + 4x2 + 6x3 + x6 = 5
x1 , x2 , x3 , x4 , x5 , x6 ≥ 0

Step(c): RHS = +ve:

This step is omitted: if we tried to make the right-hand sides positive, we would end up
with too few (or no) basic variables, and restoring a full basis would require artificial
variables, which takes us back to the Big-M method.

Step(d): Basic variables.


Here we just consider the slack variables x4 , x5 and x6 as basic variables.

Step(e): Enter Tableau.


The initial tableau is given as

cj -2 -2 -4 0 0 0 0
cBj xB x1 x2 x3 x4 x5 x6 x̄B
0 x4 -2 −3? -5 1 0 0 -2
0 x5 3 1 7 0 1 0 3
0 x6 1 4 6 0 0 1 5
zj − cj 2 2 4 0 0 0 0

Show that a single pivot (on the entry marked ?) already yields the optimum:

x1 = 0, x2 = 2/3, x3 = 0; max z = −4/3 ⇒ min π = 4/3

Example 5.3.7 Suppose we are given the problem

min z = 2x1 + 3x2 + 4x3 + 5x4


subject to x1 − x2 + x3 − x4 ≥ 10
x1 − 2x2 + 3x3 − 4x4 ≥ 6
3x1 − 4x2 + 5x3 − 6x4 ≥ 15
x1 , x2 , x3 , x4 ≥ 0

Since we cannot solve a minimization problem directly, we convert it to a maximization problem:

max z = −2x1 − 3x2 − 4x3 − 5x4


subject to x1 − x2 + x3 − x4 ≥ 10
x1 − 2x2 + 3x3 − 4x4 ≥ 6
3x1 − 4x2 + 5x3 − 6x4 ≥ 15
x1 , x2 , x3 , x4 ≥ 0

Step(a): Canonical form of the problem:

max z = −2x1 − 3x2 − 4x3 − 5x4


subject to −x1 + x2 − x3 + x4 ≤ −10
−x1 + 2x2 − 3x3 + 4x4 ≤ −6
−3x1 + 4x2 − 5x3 + 6x4 ≤ −15
x1 , x2 , x3 , x4 ≥ 0

Step(b): standard form:

max z = −2x1 − 3x2 − 4x3 − 5x4


subject to −x1 + x2 − x3 + x4 + x5 = −10
−x1 + 2x2 − 3x3 + 4x4 + x6 = −6
−3x1 + 4x2 − 5x3 + 6x4 + x7 = −15
x1 , x2 , . . . , x7 ≥ 0

Step(c): RHS = +ve:

This step is omitted: if we tried to make the right-hand sides positive, we would end up
with too few (or no) basic variables, and restoring a full basis would require artificial
variables, which takes us back to the Big-M method.

Step(d): Basic variables.


Here we just consider the slack variables x5 , x6 and x7 as basic variables.

Step(e): Enter Tableau.

cj -2 -3 -4 -5 0 0 0 0
cBj xB x1 x2 x3 x4 x5 x6 x7 x̄B
0 x5 −1? 1 -1 1 1 0 0 -10
0 x6 -1 2 -3 4 0 1 0 -6
0 x7 -3 4 -5 6 0 0 1 -15
zj − cj 2 3 4 5 0 0 0 0
-2 x1 1 -1 1 -1 -1 0 0 10
0 x6 0 1 -2 3 -1 1 0 4
0 x7 0 1 -2 3 -3 0 1 15
zj − cj 0 5 2 7 2 0 0 -20

Now every x̄Bi is non-negative, so the tableau is optimal: x1 = 10 with all other variables zero, giving max z = −20, i.e. min z = 20.

Example 5.3.8 But suppose the boss now adds the new restriction:

x1 + 2x2 + 3x3 − 4x4 ≤ 8

With the dual simplex, we do not start from scratch. We simply add the new row and one
more column to our tableau. Show that
x1 = 32/3, x2 = 0, x3 = 0, x4 = 2/3, x5 = 0, x6 = 2, x7 = 13, x8 = 0
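The stated solution can be verified by computing the slack in each constraint. A quick check in exact arithmetic (the slacks x5, x6, x7 belong to the three original constraints, and x8 to the new one):

```python
from fractions import Fraction as F

x1, x2, x3, x4 = F(32, 3), F(0), F(0), F(2, 3)

# Surplus of the three original >= constraints: surplus = LHS - RHS.
assert x1 - x2 + x3 - x4 - 10 == 0            # x5 = 0 (binding)
assert x1 - 2*x2 + 3*x3 - 4*x4 - 6 == 2       # x6 = 2
assert 3*x1 - 4*x2 + 5*x3 - 6*x4 - 15 == 13   # x7 = 13
# Slack of the new <= constraint: slack = RHS - LHS.
assert 8 - (x1 + 2*x2 + 3*x3 - 4*x4) == 0     # x8 = 0 (binding)
```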

Example 5.3.9 Consider the following tableau, and solve it using the dual simplex method.

cj
cBj xB x1 x2 x3 x4 x5 x̄B
1 1 0 2 0 1
-1 0 -1 0 1 -2
zj − cj 1 0 2 1 0 -3

Using the tableau, what are the basic variables?


Performing a dual simplex pivot gives

cj
cBj xB x1 x2 x3 x4 x5 x̄B
0 1 -1 2 1 -1
1 0 1 0 -1 2
zj − cj 0 0 1 1 1 -5

Which gives us the optimal tableau:

cj
cBj xB x1 x2 x3 x4 x5 x̄B
0 -1 1 -2 -1 1
1 1 0 2 0 1
zj − cj 0 1 0 3 2 -6
x1 = 1, x2 = 0, x3 = 1, x4 = 0, x5 = 0, z = −6

Example 5.3.10 Using the example below, demonstrate the dual-simplex algorithm [four
tableaus]

max −2x − 5y − 6z
subject to x + 2y + z ≥ 3
x + 3y + 4z ≥ 4
x, y, z ≥ 0

(x, y, z, max) = (1, 1, 0, −7)

Example 5.3.11 Use dual simplex method to

max z = −3x1 − 2x2


subject to x1 + x2 ≥ 1
x1 + x2 ≤ 7
x1 + 2x2 ≥ 10
x2 ≤ 3
x1 , x2 ≥ 0

After all the steps

max z = −3x1 − 2x2


subject to −x1 − x2 + x3 = −1
x1 + x2 + x4 = 7
−x1 − 2x2 + x5 = −10
x2 + x6 = 3
x1 , x2 , . . . , x6 ≥ 0

Tableau #1
cj -3 -2 0 0 0 0 0
cBj xB x1 x2 x3 x4 x5 x6 x̄B
0 x3 -1 -1 1 0 0 0 -1
0 x4 1 1 0 1 0 0 7
0 x5 -1 −2? 0 0 1 0 -10
0 x6 0 1 0 0 0 1 3
zj − cj 3 2 0 0 0 0 0

Tableau #2
 cj              -3     -2     0      0      0      0
 cBj   xB        x1     x2     x3     x4     x5     x6      x̄B
 0     x3        -1/2    0     1      0     -1/2    0        4
 0     x4         1/2    0     0      1      1/2    0        2
 -2    x2         1/2    1     0      0     -1/2    0        5
 0     x6        -1/2?   0     0      0      1/2    1       -2    ⇐
       zj − cj    2      0     0      0      1      0      -10
Tableau #3
cj -3 -2 0 0 0 0 0
cBj xB x1 x2 x3 x4 x5 x6 x̄B
0 x3 0 0 1 0 -1 -1 6
0 x4 0 0 0 1 1 1 0
-2 x2 0 1 0 0 0 1 3
-3 x1 1 0 0 0 -1 -2 4
zj − cj 0 0 0 0 3 4 -18
x1 = 4, x2 = 3, z = −18
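As a check, the solution read from Tableau #3 satisfies all four original constraints, three of them with equality:

```python
x1, x2 = 4, 3

assert x1 + x2 >= 1          # 7 >= 1 (surplus x3 = 6)
assert x1 + x2 <= 7          # binding (x4 = 0)
assert x1 + 2*x2 >= 10       # binding (x5 = 0)
assert x2 <= 3               # binding (x6 = 0)

print(-3*x1 - 2*x2)          # -18, matching z = -18
```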

Example 5.3.12

Minimize z = 3x1 + 5x2 + 3x3


subject to x1 + 4x2 + x3 ≥ 7
2x1 + x2 + x4 ≥ 10
x1 , x2 , x3 , x4 ≥ 0

Step #I: Always solve a max problem.

Maximize π = −3x1 − 5x2 − 3x3


subject to x1 + 4x2 + x3 ≥ 7
2x1 + x2 + x4 ≥ 10
x1 , x2 , x3 , x4 ≥ 0

Since all cj < 0, it is better to use the dual simplex algorithm.
x1 = 0, x2 = 7/4, x3 = 0, x4 = 33/4; min z = 35/4
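The claimed optimum (x1, x2, x3, x4) = (0, 7/4, 0, 33/4) can be verified directly in exact arithmetic (note that x4 carries zero cost in the objective):

```python
from fractions import Fraction as F

x1, x2, x3, x4 = F(0), F(7, 4), F(0), F(33, 4)

assert x1 + 4*x2 + x3 >= 7               # 4 * 7/4 = 7, binding
assert 2*x1 + x2 + x4 >= 10              # 7/4 + 33/4 = 10, binding
assert 3*x1 + 5*x2 + 3*x3 == F(35, 4)    # min z = 35/4
```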

Example 5.3.13 Perform a pivot step and go to the next iteration with a new basic vector.
Original tableau
x1 x2 x 3 x4 x5 x6 x̄B
1 0 0 4 -5 7 8
0 1 0 -2 4 -2 -2
0 0 1 1 -3 2 2
0 0 0 1 3 2 0

Example 5.3.14 Perform a pivot step and go to the next iteration with a new basic vector
using the dual simplex algorithm.
Original tableau
x1 x2 x3 x4 x5 x6 x̄B
1 1 1 1 0 0 1
-1 -2 -1 0 1 0 -8
1 3 2 0 0 1 5
8 24 15 0 0 0 0
State the initial basic variables.
Note 5.3.3 As before, a pivot step is:
(i) nondegenerate if the minimum ratio is > 0
(ii) degenerate if the minimum ratio is 0
Remark 5.3.1 Usefulness of the Dual Simplex Algorithm:
It is not used to solve new LPs, because the dual simplex minimum-ratio test needs O(n)
comparisons in every pivot step (the primal simplex minimum-ratio test needs only O(m)
comparisons per step, and in most real-world models n >> m).
However, the dual simplex algorithm is very useful in sensitivity analysis. After a problem
is solved, if changes occur in the RHS constants vector, dual simplex iterations are used
to obtain the new optimum.
Remark 5.3.2 Dual Simplex Method:
If an initial dual feasible basis is not available, an artificial dual feasible basis can be
constructed by taking an arbitrary basis and then adding one artificial constraint.
Though mathematically well specified, this method is not used much in practice.
Remark 5.3.3 The idea behind the dual simplex method is to start with a legitimate, optimal
tableau and move toward a feasible tableau without losing legitimacy or optimality. A single
iteration of the Dual Simplex Method consists of the following:
(1.) Choose the departing row (basic variable) to be any row where the RHS is < 0.
(2.) The candidates for the entering variable are the non-basic variables which have negative
entries in the departing row. If there are no candidates for entering, the problem is
infeasible.
(3.) If there are candidates for entering, choose one which maintains optimality. The first row
operation in the pivot process can be adding a positive multiple of the departing row to
the objective row. The entering variable will be the one with the smallest possible multiple.
(4.) Pivot. Repeat from step (1.) until an optimal solution is found or until the problem is
shown to be infeasible.

After adding slack variables, the initial tableau for the following simple problem is optimal
and legitimate, but it is not feasible. We use the DSM to obtain feasibility without losing
optimality or legitimacy.
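The four-step iteration above can be sketched in Python as follows. This is a minimal illustration, not production code; the function name and tableau layout (each row ends with its RHS, and the last row holds zj − cj with the objective value at the end, matching the tableau convention of these notes) are my own choices:

```python
def dual_simplex(tableau, basis):
    """Run dual simplex iterations on `tableau` (m constraint rows plus a
    final zj - cj row; each row ends with its RHS). `basis` lists the column
    index of the basic variable of each constraint row. Stops when every RHS
    is non-negative (optimal) or infeasibility is detected."""
    m = len(tableau) - 1                       # number of constraint rows
    n = len(tableau[0]) - 1                    # columns, excluding RHS
    while True:
        # Step 1: departing row = a row with negative RHS (most negative here).
        r = min(range(m), key=lambda i: tableau[i][-1])
        if tableau[r][-1] >= 0:
            return tableau, basis              # feasible, hence optimal
        # Step 2: entering candidates = negative entries in the departing row.
        cands = [j for j in range(n) if tableau[r][j] < 0]
        if not cands:
            raise ValueError("problem is infeasible")
        # Step 3: dual ratio test keeps every zj - cj non-negative.
        c = min(cands, key=lambda j: tableau[-1][j] / -tableau[r][j])
        # Step 4: pivot on (r, c).
        piv = tableau[r][c]
        tableau[r] = [v / piv for v in tableau[r]]
        for i in range(m + 1):
            if i != r:
                f = tableau[i][c]
                tableau[i] = [v - f * p for v, p in zip(tableau[i], tableau[r])]
        basis[r] = c
```

Feeding this function the initial tableau of Example 5.3.15 below reproduces the sequence of tableaus shown there.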
Example 5.3.15
MAX W = −X1 − 2X2
SUBJECT TO −X1 + 2X2 − X3 ≤ −4
−2X1 − X2 + X3 ≤ −6

The initial tableau after adding slack variables is:

(BASIS) X1 X2 X3 X4 X5 RHS
X4 -1.000 2.000 -1.000 1.000 0.000 -4.000
X5 -2.000 -1.000 1.000 0.000 1.000 -6.000
Z-C 1.000 2.000 0.000 0.000 0.000 0.000
X1 ENTERS AT VALUE 3.0000 IN ROW 2 OBJ. VALUE = −3.0000
Question : If X4 departed rather than X5, which variable would have entered?

(BASIS) X1 X2 X3 X4 X5 RHS
X4 0.000 2.500 -1.500 1.000 -0.500 -1.000
X1 1.000 0.500 -0.500 0.000 -0.500 3.000
Z-C 0.000 1.500 0.500 0.000 0.500 -3.000

X3 ENTERS AT VALUE 0.66667 IN ROW 1 OBJ. VALUE = −3.3333

(BASIS) X1 X2 X3 X4 X5 RHS
X3 0.000 -1.667 1.000 -0.667 0.333 0.667
X1 1.000 -0.333 0.000 -0.333 -0.333 3.333
Z-C 0.000 2.333 0.000 0.333 0.333 -3.333

This tableau is now optimal, feasible and legitimate. The optimal solution is:

X1 = 3.3333, X2 = 0, X3 = 0.667, X4 = 0, X5 = 0, W = −3.333
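The optimal point can be confirmed against the two constraints in exact arithmetic (3.3333 and 0.667 in the tableau are the rounded values of 10/3 and 2/3):

```python
from fractions import Fraction as F

X1, X2, X3 = F(10, 3), F(0), F(2, 3)

assert -X1 + 2*X2 - X3 <= -4     # -10/3 - 2/3 = -4, binding (X4 = 0)
assert -2*X1 - X2 + X3 <= -6     # -20/3 + 2/3 = -6, binding (X5 = 0)
assert -X1 - 2*X2 == F(-10, 3)   # W = -10/3, i.e. approximately -3.333
```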
