
Linear Programming

Dr. G SRINIVASAN
Industrial Management Division
IITM
Let us consider the following linear programming
problem (P1) given by

Maximize Z = 6X1 + 5X2

Subject to
X1 + X2 ≤ 5 (2.1)
3X1 + 2X2 ≤ 12 (2.2)
X1, X2 ≥ 0

The problem has two variables and two constraints.


GRAPHICAL METHOD

[Figure: the constraints X1 + X2 ≤ 5 and 3X1 + 2X2 ≤ 12 plotted on the (X1, X2) plane. The shaded region below both lines, with corner points (0,0), (4,0), (2,3) and (0,5), has all points satisfying all the constraints.]

All points in this region satisfy all the constraints and are feasible solutions. This region is called the feasible region.
• We are interested only in the feasible region since we want to solve the given
problem to find the best solution satisfying all the constraints.

• Let us consider any point inside the feasible region, say (1,1).

• There is always one point to the right or above this point that is feasible and has a
higher value of the objective function.

• For example, points (1,2) and (2,1) have better values of objective function than the
point (1,1).

• This is because the objective function coefficients are both positive.


• This means that for every point inside the feasible region, we can move to the right or upwards until we come out of the feasible region, and the last point inside the feasible region will have a better value of the objective function.

• If the objective function had negative coefficients, points to the left or below will be
superior.
• This means that it is enough to consider the boundary of the feasible region to search for
the best solution.
Let us consider the objective function 6X1 + 5X2. The slope of the objective function is different from the slope of both of the constraints.

In such cases, we can show that for every point on the boundary of the feasible region, one of the corner points (points of intersection of the constraints) will be superior in terms of the objective function.

Therefore, it is enough to consider only the corner points to reach the best solution.

[Figure: the feasible region of P1 again, with corner points (0,0), (4,0), (2,3) and (0,5).]
• In our example, there are four corner points. These are (0,0), (4,0), (0,5) and (2,3). Let
us evaluate the objective function at these points.

At (0,0) Z = 0
At (4,0) Z = 24
At (0,5) Z = 25
At (2,3) Z = 27

• We observe that the corner point (2,3) given by the solution X1=2, X2=3 has the
maximum objective function value of Z = 27. This is the best solution or optimal
solution.

• Another way to reach the optimum solution is to plot the objective function for some
arbitrary value, say 6X1 + 5X2 = 12.

• Since we want to maximize 6X1 + 5X2, we plot another line, say 6X1 + 5X2 = 20.

This line is parallel to the first line and has moved in the direction of increase of the objective function. If we want to maximize 6X1 + 5X2, we keep moving the line in the direction of increase.

We can move the line until it comes out of the feasible region. The last point it touches before it leaves the feasible region is the corner point (2,3).

This point is the feasible point that has the highest value of the objective function and is optimal.

[Figure: the feasible region with the parallel objective function lines 6X1 + 5X2 = 12 and 6X1 + 5X2 = 20, the latter touching the corner point (2,3).]
Graphical method
1. Plot the constraints on a graph.

2. Plot also the non negativity restrictions (restrict yourself to the quadrant where both X1 and X2 are ≥ 0).

3. Identify the feasible region that contains the set of points satisfying all the constraints.

4. Identify the corner points.

5. Evaluate the objective function at all the corner points.

6. The corner point that has the best value of the objective function (maximum or minimum depending on the objective function) is optimal. The computation in steps 5 and 6 is sketched below.
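A minimal sketch of steps 5 and 6 for problem P1 (Python is an assumption; the slides themselves contain no code):

corner_points = [(0, 0), (4, 0), (0, 5), (2, 3)]   # step 4: read off the graph

def z(x1, x2):
    return 6 * x1 + 5 * x2                         # objective function of P1

for p in corner_points:                            # step 5: evaluate Z at each corner
    print(p, z(*p))                                # 0, 24, 25, 27

best = max(corner_points, key=lambda p: z(*p))     # step 6: best corner point wins
print("optimal corner point:", best, "Z =", z(*best))   # (2, 3), Z = 27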
The Algebraic method

• Let us consider the same example and illustrate the algebraic method.
Maximize Z = 6X1 + 5X2
Subject to
X1 + X 2  5 (2.1)
3X1 + 2X2  12 (2.2)

• Assuming that we know how to solve linear equations, we convert the inequalities into equations by adding slack variables X3 and X4 respectively.

• These two slack variables do not contribute to the objective function.


The Linear programming problem becomes
Maximize Z = 6X1 + 5X2 + 0 X3 + 0 X4
Subject to
X1 + X2 + X3 = 5
3X1 + 2X2 + X4 = 12

• With the addition of the slack variables, we now have four variables and two
equations. With two equations, we can solve only for two variables.

• We have to fix any two variables to some arbitrary value and can solve for the
remaining two variables.

• The two variables to which we fix arbitrary values can be chosen in 4C2 = 6 ways.

• In each of these six combinations, we can actually fix the variables to any arbitrary value, resulting in an infinite number of solutions.
• However, we consider fixing the arbitrary value to zero and hence consider only six distinct
possible solutions.

• The variables that we fix to zero are called non-basic variables and the variables that we solve for are called basic variables.
• These solutions obtained by fixing the non basic variables to zero are called basic
solutions.
Maximize Z = 6X1 + 5X2 + 0 X3 + 0 X4
Subject to
X1 + X2 + X3 = 5
3X1 + 2X2 + X4 = 12

The six basic solutions are:


1. Variables X1 and X2 are non-basic and set to zero. Substituting we get X3 = 5, X4 = 12 and the value of
the objective function Z = 0.

2. Variables X1 and X3 are non-basic and set to zero. Substituting, we solve for X2 = 5 and 2X2 + X4 = 12
and get X2 = 5, X4 = 2 and value of objective function Z = 25

3. Variables X1 and X4 are non-basic and set to zero. Substituting, we solve for X2 + X3 = 5 and 2X2 = 12
and get X2 = 6, X3 = -1

4. Variables X2 and X3 are non-basic and set to zero. Substituting, we solve for X1 = 5 and 3X1 + X4 = 12
and get X1 = 5, X4 = -3

5. Variables X2 and X4 are non-basic and set to zero. Substituting, we solve for X1 + X3 = 5 and 3X1 = 12
and get X1 = 4, X3 = 1 and value of objective function Z = 24

6. Variables X3 and X4 are non-basic and set to zero. Substituting, we solve for X1 + X2 = 5 and 3X1 + 2X2
= 12 and get X1 = 2, X2 = 3 and value of objective function Z = 27
• Among these six basic solutions, we observe that four are feasible.
• Those basic solutions that are feasible (satisfy all constraints) are called basic
feasible solutions.

• The remaining two (solutions 3 and 4) have negative values for some
variables and are therefore infeasible.
• We are interested only in feasible solutions and therefore do not evaluate
the objective function for infeasible solutions.
• Let us consider a non-basic solution derived from the sixth solution, in which variables X3 and X4 are fixed to arbitrary values (other than zero).

• We have to fix them at non negative values because otherwise they will
become infeasible.

• Let us fix them as X3 = 1 and X4 = 1.

• On substitution, we get X1 + X2 = 4 and 3X1 + 2X2 = 11 and get X1 = 3, X2 = 1


and value of objective function Z = 23.

• This non-basic feasible solution is clearly inferior to the solution X1 = 2, X2 = 3


obtained as a basic feasible solution by fixing X3 and X4 to zero.

• The solution (3,1) is an interior point in the feasible region while the basic
feasible solution (2,3) is a corner point.

• We have already seen that it is enough only to evaluate corner points.


• We can observe that the four basic feasible solutions correspond to the
four corner points.

• Every non basic solution that is feasible corresponds to an interior point


in the feasible region and every basic feasible solution corresponds to a
corner point solution.

• In the algebraic method it is enough only to evaluate the basic solutions,


find out the feasible ones and evaluate the objective function to obtain
the optimal solution.
ALGEBRAIC METHOD
1. Convert the inequalities into equations by adding slack variables.

2. Assuming that there are m equations and n variables, set n-m (non
basic) variables to zero and evaluate the solution for the remaining
m basic variables. Evaluate the objective function if the basic
solution is feasible.

3. Perform Step 2 for all the nCm combinations of basic variables.

4. Identify the optimum solution as the one with the maximum (minimum) value of the objective function. A sketch of this procedure is given below.
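The whole procedure can be sketched in a few lines for problem P1 (assuming Python with numpy; the slides themselves contain no code):

from itertools import combinations
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],         # X1 + X2 + X3 = 5
              [3.0, 2.0, 0.0, 1.0]])        # 3X1 + 2X2 + X4 = 12
b = np.array([5.0, 12.0])
c = np.array([6.0, 5.0, 0.0, 0.0])          # slack variables contribute 0 to Z

best_z, best_x = None, None
for basis in combinations(range(4), 2):     # all nCm = 4C2 = 6 choices of basis
    B = A[:, basis]
    if abs(np.linalg.det(B)) < 1e-9:        # singular basis: no basic solution
        continue
    x = np.zeros(4)
    x[list(basis)] = np.linalg.solve(B, b)  # non-basic variables stay at zero
    if (x >= -1e-9).all():                  # keep only basic feasible solutions
        zval = c @ x
        if best_z is None or zval > best_z:
            best_z, best_x = zval, x
print(best_x, best_z)                       # [2. 3. 0. 0.] 27.0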
The following are the limitations of the algebraic method:

• We have to evaluate all the nCm basic solutions before we obtain the optimum.

• Some basic solutions can be infeasible, and we have to compute them also before discovering that they are infeasible.

• Among the basic feasible solutions, we don’t evaluate better and better solutions. Some
of the subsequently evaluated basic feasible solutions can have inferior value of the
objective function when compared to the best solution.

What we therefore need is a method that

• Does not evaluate any basic infeasible solution.

• Progressively obtains better and better feasible solutions

• Identifies the optimum solution the moment it is reached so that all basic feasible
solutions are not evaluated.
• The algorithm that has all the above characteristics is the simplex
algorithm.

• We first explain this algorithm in an algebraic form and then in the tabular
form.

Algebraic form of the simplex algorithm


Let us consider the same example to illustrate the method.

Maximize Z = 6X1 + 5X2 + 0 X3 + 0 X4


Subject to
X1 + X2 + X3 = 5
3X1 + 2X2 + X4 = 12
X1, X2 ≥ 0
Iteration 1
We start with a basic feasible solution with X3 and X4 as basic variables. We write the
basic variables in terms of the non-basic variables as
X3 = 5 – X1 – X2
X4 = 12 – 3X1 – 2X2
Z = 0 + 6X1 + 5X2

The present solution has Z = 0 since X1 and X2 are presently non basic with values
zero.

We want to increase Z and this is possible by increasing X1 or X2. We choose to


increase X1 by bringing it to the basis because it has the highest rate of increase.

Considering the equation X3 = 5 – X1 – X2, X1 can be increased to 5 beyond which X3


will be negative and infeasible.

Considering the equation X4 = 12 – 3X1 – 2X2, X1 can be increased to 4 beyond which


X4 will become negative.

The limiting value of X1 (the allowable upper limit on X1) is therefore 4, from the second equation.
Iteration 2
• Since variable X1 becomes basic based on equation 2, we rewrite equation 2 as
3X1 = 12 – 2X2 – X4 (3)
X1 = 4 – 2/3 X2 – 1/3 X4
• Substituting for X1 in equation 1 we get
X3 = 5 – (4 – 2/3 X2 – 1/3 X4) – X2 which on simplification yields
X3 = 1 – 1/3 X2 + 1/3 X4. (4)
Z = 6 (4 – 2/3 X2 – 1/3 X4 ) + 5X2 = 24 + X2 – 2X4.

• Our objective is to maximize Z and this can be achieved by increasing X2 or by decreasing


X4.
• Decreasing X4 is not possible because already X4 is at zero and decreasing it will
make it infeasible.
• Increasing X2 is possible since it is at present at zero.

• From equation 3 we observe that X2 can be increased to 6 beyond which variable X1


would become negative and infeasible.
• From equation 4 we observe that X2 can be increased up to 3 beyond which variable X3
will be negative.
• The limiting value is 3 and variable X2 replaces variable X3 in the basis.
Iteration 3
Rewriting equation 4 in terms of X2 we get
1/3 X2 = 1 - X3 + 1/3 X4 . (5)
X2 = 3 - 3X3 + X4 .

Substituting for X2 in equation 3 gives us


X1 = 4 – 2/3 (3 - 3X3 + X4) - 1/3 X4 = 2 + 2X3 - X4 (6)
Z = 24 + 3 – 3X3 + X4 – 2X4 = 27 – 3X3 – X4.

We would still like to increase Z and this is possible only if we can decrease X3 or X4
since both have a negative coefficient in Z.

Already both are non basic at zero and will only yield infeasible solutions if we
decrease them.

We observe that the optimum is reached since there is no entering variable. The
optimum solution is given by X1 = 2, X2 = 3 and Z = 27.
• We observe that the above method meets all our requirements but does some extra
computations in finding out the limiting value of the entering variable for every
iteration.
• Experience has indicated that this method is superior and computationally faster than the
algebraic method.

• The simplex method can be represented in a tabular form, where only the numbers
are written in a certain easily understandable form.
• Several forms are available but we present only one version of the simplex method in
tabular form.

Table 2.1 represents the simplex tableau for the first iteration

Table 2.1

          Cj:    6     5     0     0
CB   Basic       X1    X2    X3    X4    RHS   θ
0    X3          1     1     1     0     5     5
0    X4          [3]   2     0     1     12    4
     Cj – Zj     6     5     0     0

The Cj – Zj for a variable is Cj minus the dot product of the CB and the
column corresponding to the variable j.
For example, C1 – Z1 = 6 – (0*1 + 0*3) = 6.

The variable with the maximum positive value of Cj – Zj enters. In our example it is variable X1.

The θ values are the ratios between the RHS value and the coefficient under the entering variable column.

In our example these are 5/1 = 5 and 12/3 = 4 respectively. The minimum θ is 4 and variable X4 is the leaving variable. Variable X1 replaces X4 as the basic variable in the next iteration.
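A minimal sketch of this dot product computation for Table 2.1 (Python, an assumption):

c  = [6, 5, 0, 0]                            # objective coefficients Cj
cb = [0, 0]                                  # CB of the basic variables X3, X4
columns = [[1, 3], [1, 2], [1, 0], [0, 1]]   # constraint columns of X1..X4

cj_zj = [cj - sum(w * a for w, a in zip(cb, col))
         for cj, col in zip(c, columns)]
print(cj_zj)                                 # [6, 5, 0, 0]
entering = max(range(4), key=lambda j: cj_zj[j])
print("entering variable: X%d" % (entering + 1))   # X1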
In the previous iteration, we were solving for variables X3 and X4.

They had an identity matrix as their coefficients (or X3 and X4 appeared in one
equation only with a +1 coefficient), so that we can directly solve for them.

In the next iteration, we need to rewrite the constraints (rows) such that X3 and X1
have the identity matrix as coefficients.

We call the row corresponding to the leaving variable the pivot row and the corresponding element in the entering column the pivot element.

The pivot element is shown in brackets in Table 2.1.

Rules for row operations in a simplex table

1. Rewrite the new pivot row by dividing every element in the row by the pivot element.

2. For every non-pivot row i, rewrite every element j in this row as new aij = old aij – ail × (new akj), where ail is the element of row i in the entering column l and row k is the pivot row (after step 1). A sketch of these rules in code follows.
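A minimal sketch of these two rules (Python, an assumption), applied to Table 2.1 with pivot row X4 and pivot column X1:

def pivot(tableau, k, l):
    # Rule 1: divide the pivot row k by the pivot element.
    p = tableau[k][l]
    tableau[k] = [v / p for v in tableau[k]]
    # Rule 2: from every non-pivot row i, subtract a_il times the new pivot
    # row, so that the entering column becomes a unit column.
    for i in range(len(tableau)):
        if i != k:
            a_il = tableau[i][l]
            tableau[i] = [v - a_il * w for v, w in zip(tableau[i], tableau[k])]
    return tableau

# Rows of Table 2.1 with the RHS as the last entry; pivot on row 1, column 0.
t = [[1, 1, 1, 0, 5],
     [3, 2, 0, 1, 12]]
for row in pivot(t, 1, 0):
    print(row)     # matches the X3 and X1 rows of Table 2.2 (in decimals)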
Table 2.2 shows the first two iterations of the simplex algorithm.

          Cj:    6     5      0     0
CB   Basic       X1    X2     X3    X4     RHS   θ
0    X3          1     1      1     0      5     5
0    X4          [3]   2      0     1      12    4
     Cj – Zj     6     5      0     0

0    X3          0     [1/3]  1     -1/3   1     3
6    X1          1     2/3    0     1/3    4     6
     Cj – Zj     0     1      0     -2     24

Here variable X2, with a positive value of Cj – Zj, enters the basis.

The θ values are 1 ÷ 1/3 = 3 and 4 ÷ 2/3 = 6 respectively.
Since the minimum θ = 3 is from the first row, variable X3 leaves the basis and is replaced by X2.
The row operations resulting in the identity matrix for columns X1 and X2 are carried out as explained before.
Table 2.3 shows the third iteration

          Cj:    6     5     0     0
CB   Basic       X1    X2    X3    X4    RHS
5    X2          0     1     3     -1    3
6    X1          1     0     -2    1     2
     Cj – Zj     0     0     -3    -1    27

The Cj – Zj values for the non basic variables are –3 and –1 and are negative. The algorithm terminates with the solution X1 = 2, X2 = 3 with Z = 27 as the optimal solution.

We observe that the increase in the objective function value in every iteration is the product of the Cj – Zj value of the entering variable and the minimum θ corresponding to the leaving variable.
Example 2
Consider the example
Maximize 6X1 + 8X2
Subject to
X1 + X2 ≤ 10
2X1 + 3X2 ≤ 25
X1 + 5X2 ≤ 35
X1, X2 ≥ 0

Adding slack variables, we get

Maximize 6X1 + 8X2 + 0X3 + 0X4 + 0X5

Subject to
X1 + X2 + X3 = 10
2X1 + 3X2 + X4 = 25
X1 + 5X2 + X5 = 35
X1, X2, X3, X4, X5 ≥ 0
          Cj:    6      8     0     0      0
CB   Basic       X1     X2    X3    X4     X5     RHS    θ
0    X3          1      1     1     0      0      10     10
0    X4          2      3     0     1      0      25     25/3
0    X5          1      [5]   0     0      1      35     7
     Cj – Zj     6      8     0     0      0      0

0    X3          4/5    0     1     0      -1/5   3      15/4
0    X4          [7/5]  0     0     1      -3/5   4      20/7
8    X2          1/5    1     0     0      1/5    7      35
     Cj – Zj     22/5   0     0     0      -8/5   56

0    X3          0      0     1     -4/7   [1/7]  5/7    5
6    X1          1      0     0     5/7    -3/7   20/7   --
8    X2          0      1     0     -1/7   2/7    45/7   45/2
     Cj – Zj     0      0     0     -22/7  2/7    480/7

0    X5          0      0     7     -4     1      5
6    X1          1      0     3     -1     0      5
8    X2          0      1     -2    1      0      5
     Cj – Zj     0      0     -2    -2     0      70
The important observations from the table are as follows:

• The variable X5 leaves the simplex table in the first iteration but enters the table again as a basic
variable in the fourth (last) iteration.
• This can happen frequently in simplex iterations.
• A variable that leaves the table can enter the basis again in a subsequent iteration.
• This also indicates that there is no upper limit for the number of iterations.

• If the condition that a leaving variable does not enter again holds, then it is possible to define an
upper limit on the number of iterations. Unfortunately this is not true.

• Every iteration is characterized by the set of basic variables and not by the presence or absence
of a single variable.

• In the third iteration we have not computed all the values of θ.

• There is one place where we would have to divide the RHS value (20/7) by a negative number (-3/7).
• We do not compute this value of θ. We compute values of θ only when the coefficient under the entering column is strictly positive, not when it is negative or zero.

• This is because a negative or undefined value of θ indicates that for any value of the entering variable, the corresponding basic variable does not become negative.
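A quick cross-check of Example 2 with an off-the-shelf solver (a sketch assuming scipy is available; linprog minimizes, so the objective is negated):

from scipy.optimize import linprog

res = linprog(c=[-6, -8],                        # maximize 6X1 + 8X2
              A_ub=[[1, 1], [2, 3], [1, 5]],
              b_ub=[10, 25, 35],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)                           # [5. 5.] 70.0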
The three important steps (or stages) in the simplex algorithm are
• The algorithm starts with a basic feasible solution (Initialization). The Right Hand Side
(RHS) values are always non negative.

• The algorithm goes through the intermediate iterations where solutions with better values of the objective function are found (Iteration)

• The algorithm terminates when there is no entering variable to provide the optimal
solution (Termination)

• Are there issues that have to be explicitly considered?

• Do all linear programming problems have optimal solutions?

• Does the simplex algorithm always terminate by providing the optimal solution?

• Let us look at these and other issues in Initialization, iteration and termination in
detail.
Initialization
Let us consider an example
Minimize Z = 3X1 + 4X2
Subject to
2X1 + 3X2  8
5X1 + 2X2  12
X1, X2  0
The first step is to convert the inequalities into equations.

We add variables X3 and X4, called negative slack (or surplus) variables that have to be subtracted from
the left hand side values to equate to the RHS values.

These surplus variables are also defined to be  0 (non negative). The problem now becomes

Minimize Z = 3X1 + 4X2 + 0X3 + 0X4


Subject to
2X1 + 3X2 - X3 = 8
5X1 + 2X2 - X4 = 12
X1, X2, X3, X4  0

• Once again we have four variables and two equations (constraints). We need a starting basic feasible
solution to start the simplex algorithm.

• The easiest among the basic solutions is to fix the surplus variables as basic variables and solve the problem.

• Such a solution is easily readable because each surplus variable appears in only one constraint, so we can solve for it directly.

• The coefficient matrix associated with the surplus variables is the negative of an identity matrix.

• Unfortunately this solution is X3 = -8 and X4 = -12, which is infeasible. It is not a basic feasible solution and cannot be used as an initial solution to start the simplex table.

• In general, when the constraints are ≥ inequalities, starting the simplex algorithm with the surplus variables as basic variables is not possible because we would be starting with an infeasible solution.

• We therefore have to identify a starting basic feasible solution which has some
other set of basic variables.

• One way to do this is to follow the algebraic method till we get a feasible solution
and start with this as the first iteration for the simplex method.
• We don’t follow this approach because it may take many trials in an algebraic method
before we get a first feasible solution.

• We normally use an indirect approach to get the starting solution for the simplex
table.
If we introduced two new variables arbitrarily (say, X5 and X6) such that
2X1 + 3X2 - X3 + X5 = 8
5X1 + 2X2 - X4 + X6 = 12

variables X5 and X6 automatically provide us with a starting basic feasible solution with X5 = 8 and
X6 = 12. This is basic feasible and we can proceed with the simplex table.

We should understand that variables X5 and X6 are not part of the original problem and have been
introduced by us to get a starting basic feasible solution.
• These are called artificial variables and have been introduced for the specific purpose of starting the simplex table.

These are now denoted by a1 and a2. Since they are not part of the original problem, we have to
ensure that they should not be in the optimal solution (when we find the optimum).

We ensure this by giving them a very large and positive coefficient (say, 10000) in the objective function.

The problem now becomes


Minimize Z = 3X1 + 4X2 + 0X3 + 0X4 + 10000a1 + 10000a2
Subject to
2X1 + 3X2 - X3 + a1 = 8
5X1 + 2X2 - X4 + a2 = 12
X1, X2, X3, X4, a1, a2  0
• If the given problem has an optimal solution, it will not involve a1 or a2 because a1 and a2, the artificial
variables are not part of the original problem.

• Every basic feasible solution to the new problem (with the artificial variables) will have an objective
function value more than every basic feasible solution without either of the artificial variables, because
the artificial variables have a very large and positive contribution to the minimization objective function.

• Therefore providing a large positive value to the objective function coefficient of the artificial variable
ensures that the artificial variables do not appear in the optimal solution (if it exists).

• We define the large and positive value to the objective function coefficient of the artificial variable as M
(big M), which is large and positive and tending to infinity. We have
M × constant = M
M ± constant = M

• Now we begin the simplex table to start solving the problem. Since our standard problem has a
maximization function, we multiply the coefficients of the objective function by –1 and convert it to a
maximization problem.

The problem now becomes

Maximize Z = -(3X1 + 4X2 + 0X3 + 0X4 + Ma1 + Ma2)
Subject to
2X1 + 3X2 - X3 + a1 = 8
5X1 + 2X2 - X4 + a2 = 12
X1, X2, X3, X4, a1, a2 ≥ 0
The simplex iterations are shown in Table 2.5.

          Cj:    -3     -4           0      0           -M     -M
CB   Basic       X1     X2           X3     X4          a1     a2      RHS    θ
-M   a1          2      3            -1     0           1      0       8      4
-M   a2          [5]    2            0      -1          0      1       12     12/5
     Cj – Zj     7M-3   5M-4         -M     -M          0      0

-M   a1          0      [11/5]       -1     2/5         1      -2/5    16/5   16/11
-3   X1          1      2/5          0      -1/5        0      1/5     12/5   6
     Cj – Zj     0      11M/5-14/5   -M     2M/5-3/5    0      -7M/5+3/5

-4   X2          0      1            -5/11  2/11        5/11   -2/11   16/11
-3   X1          1      0            2/11   -3/11       -2/11  3/11    20/11
     Cj – Zj     0      0            -14/11 -1/11       -ve    -ve     -124/11


The optimum solution is X1 = 20/11, X2 = 16/11 and Z = 124/11. The simplex table
will show a negative value because we have solved a maximization problem by
multiplying the objective function by –1.
Some important observations are:

• It is easier to introduce artificial variables when the problem does not have a visible
initial basic feasible solution.
• We introduce as many artificial variables as the number of constraints.
• The number of variables increases and not the number of constraints.
• The big M ensures that the artificial variables do not appear in the optimal solution (if one
exists).

• Since we have two artificial variables as the starting basic variables, we need a
minimum of two iterations to find the optimum, since they have to leave the basis.

• We need not evaluate the Cj –Zj values corresponding to the artificial variables at all
because we don’t want the artificial variable to enter the basis. They have been
shown as negative in the Table.
• There is another method called Two-phase method from which we can get an initial basic
feasible solution for the simplex algorithm using artificial variables. This is explained below for
the same example.

• Here, the artificial variables have an objective function coefficient of –1 (Maximization). The
other variables have an objective function coefficient of zero. The simplex table (2.6) is shown
below:
          Cj:    0     0       0      0      -1     -1
CB   Basic       X1    X2      X3     X4     a1     a2     RHS    θ
-1   a1          2     3       -1     0      1      0      8      4
-1   a2          [5]   2       0      -1     0      1      12     12/5
     Cj – Zj     7     5       -1     -1     0      0

-1   a1          0     [11/5]  -1     2/5    1      -2/5   16/5   16/11
0    X1          1     2/5     0      -1/5   0      1/5    12/5   6
     Cj – Zj     0     11/5    -1     2/5    0      -7/5

0    X2          0     1       -5/11  2/11   5/11   -2/11  16/11
0    X1          1     0       2/11   -3/11  -2/11  3/11   20/11
     Cj – Zj     0     0       0      0      -1     -1     0
Having obtained the optimum for the first phase with X1 = 20/11 and
X2 = 16/11, we can start the second phase of the simplex algorithm from the
first phase by eliminating the artificial variables. The Table (2.7) for the second
phase becomes

          Cj:    -3    -4    0       0
CB   Basic       X1    X2    X3      X4     RHS
-4   X2          0     1     -5/11   2/11   16/11
-3   X1          1     0     2/11    -3/11  20/11
     Cj – Zj     0     0     -14/11  -1/11  -124/11

For our example, we realize that the starting basic feasible solution without artificial variables at the end of the first phase is optimal.
If it is not, then we proceed with the simplex iterations to get the optimal solution.
Initialization deals with getting an initial basic feasible solution for the given problem.
Identifying a set of basic variables with an identity coefficient matrix is the outcome of the initialization process. We have to consider the following aspects of initialization (in the same order as stated):
• RHS values
• Variables
• Objective function
• Constraints

Let us consider each of them in detail.

The right hand side (RHS) value of every constraint should be non negative. It is usually a rational number. If it is negative, we have to multiply the constraint by -1 to make the RHS non negative. The sign of the inequality will change.

The variables can be of three types: ≥ 0 type, ≤ 0 type and unrestricted.

Of the three, the ≥ 0 type is desirable. If we have a ≤ 0 type variable, we replace it with another variable of the ≥ 0 type as follows:
• If variable Xk is ≤ 0, we replace it with variable Xp = -Xk where Xp ≥ 0. This change is incorporated in all the constraints as well as in the objective function.
• The objective function can be either maximization or minimization.

• If it is minimization, we multiply it by -1 and convert it to a maximization problem and solve.

• Constraints are of three types: ≤ type, ≥ type and equation.

• If a constraint is of the ≤ type, we add a slack variable and convert it to an equation.
• If it is of the ≥ type, we add a surplus variable (negative slack) and convert it to an equation.
• If necessary we add artificial variables to identify a starting basic feasible solution. We illustrate this using some examples.
Example
Maximize Z = 7X1 + 5X2
Subject to
2X1 + 3X2 = 7
5X1 + 2X2  11
X1, X2  0

We convert the second constraint into an equation by adding a negative slack variable X3. The
equations are
2X1 + 3X2 = 7
5X1 + 2X2 - X3 = 11

The constraint coefficient matrix is

2  3   0
5  2  -1

We don’t find variables with coefficients as in an identity matrix. We have to add two artificial
variables a1 and a2 to get
2X1 + 3X2 + a1 = 7
5X1 + 2X2 - X3 + a2 = 11

We have to start with a1 and a2 as basic variables and use the big M method.
Example
Maximize Z = 7X1 + 5X2 + 8X3 + 6X4
Subject to
2X1 + 3X2 + X3 = 7
5X1 + 2X2 + X4  11
X1, X2, X3, X4  0

• In this example, we add surplus variable X5 to the second constraint to convert it to an


equation. We get

2X1 + 3X2 + X3 = 7
5X1 + 2X2 + X4 –X5 = 11

• We observe that variables X3 and X4 have coefficients of the identity matrix and we
can start with these as initial basic variables to have a basic feasible solution. We need
not use artificial variables in this case.

• If the second constraint was 5X1 + 2X2 + 2X4  11, we can write it as 5/2 X1 + X2 + X4 
11/2 and then add the surplus variable and choose X4 as a starting basic variable.
Adding artificial variables

1. Ensure that the RHS value of every constraint is non negative

2. If we have a  constraint, we add a slack variable. This


automatically qualifies to be an initial basic variable.

3. If we have a  constraint, we add a negative slack to convert it to


an equation. This negative slack cannot qualify to be an initial
basic variable.

4. In the system of equations, identify whether there exist variables with coefficients corresponding to columns of the identity matrix. Such variables qualify to be basic variables. Otherwise, add the minimum number of artificial variables needed to get a starting basic feasible solution.
Iteration
During iteration, only one issue needs to be addressed. Let us explain this with an
example.

Example

Maximize Z = 4X1 + 3X2


Subject to
2X1 + 3X2  8
3X1 + 2X2  12
X1 , X2  0

Adding slack variables X3 and X4, we get

2X1 + 3X2 + X3 = 8
3X1 + 2X2 + X4 = 12

We set up the simplex table with X3 and X4 as basic variables. The simplex iterations are
shown below in Table 2.8
          Cj:    4     3      0     0
CB   Basic       X1    X2     X3    X4     RHS   θ
0    X3          2     3      1     0      8     4
0    X4          [3]   2      0     1      12    4
     Cj – Zj     4     3      0     0      0

0    X3          0     [5/3]  1     -2/3   0     0
4    X1          1     2/3    0     1/3    4     6
     Cj – Zj     0     1/3    0     -4/3   16

3    X2          0     1      3/5   -2/5   0
4    X1          1     0      -2/5  3/5    4
     Cj – Zj     0     0      -1/5  -6/5   16
• In the above simplex table, there was a tie for the leaving variable.
• We had a choice between X3 and X4. We chose to leave out X4.
• This tie resulted in a basic variable getting value zero in the next iteration.
• Although the optimal solution was found in the second iteration, it was not apparent
then. We performed one more iteration without increasing the objective function but
to obtain the optimality condition.

• Whenever there is a tie for the leaving variable, we end up performing


some extra iterations that do not increase the value of the objective
function. They cannot be avoided.
• This phenomenon is called degeneracy. In this example, degeneracy occurs at
the optimum.
• Degeneracy results in extra iterations that do not improve the objective function
value.
• Since the tie for the leaving variable leaves a variable with zero value in the next iteration, we do not have an increase in the objective function value
(note that the increase in the objective function is the product of the Cj – Zj and θ, and the minimum θ takes the value zero).

• Sometimes degeneracy can take place in the intermediate iterations.


• In such cases, if the optimum exists, the simplex algorithm will come out of degeneracy by
itself and terminate at the optimum.
• In these cases, the entering column will have a zero (or negative) value against the degenerate row and hence that θ will not be computed, resulting in a positive value of the minimum θ.

• There is no proven way to eliminate degeneracy or to avoid it. Sometimes a different


tie breaking rule can result in a non degenerate solution.
• In this example if we had chosen to leave X3 instead of X4 in the first iteration, the algorithm
terminates and gives the optimum after one iteration
Termination
There are four aspects to be addressed while discussing termination conditions.
These are

Alternate optimum
Unboundedness
Infeasibility
Cycling

Let us consider each through an example


Example
Maximize Z = 4X1 +3X2
Subject to
8X1 + 6X2  25
3X1 + 4X2  15
X1, X2  0

Adding slack variables X3 and X4, we can start the simplex iteration with X3 and X4 as
basic variables. This is shown in Table 2.9
          Cj:    4     3      0      0
CB   Basic       X1    X2     X3     X4     RHS    θ
0    X3          [8]   6      1      0      25     25/8
0    X4          3     4      0      1      15     5
     Cj – Zj     4     3      0      0      0

4    X1          1     3/4    1/8    0      25/8   25/6
0    X4          0     [7/4]  -3/8   1      45/8   45/14
     Cj – Zj     0     0      -1/2   0      25/2

4    X1          1     0      2/7    -3/7   5/7
3    X2          0     1      -3/14  4/7    45/14
     Cj – Zj     0     0      -1/2   0      25/2
• At the end of the first iteration we observe that the non basic variables (X2 and X3) have non
positive values of Cj - Zj, indicating that the optimum solution has been reached.

• However, one of the non basic variables X2 has a Cj – Zj value of zero. If we enter this, we
get another optimum solution with the same value of the objective function.

• Now non basic variable X3 has C3 – Z3 = 0 and if we enter X3 and iterate, we get the table
as in iteration 2.

• It looks as though the simplex algorithm seems to be getting into an infinite loop
although the optimum has been found.

• This phenomenon is called alternate optimum.


• This happens when one of the constraints is parallel to the objective function.
• Actually there are an infinite number of alternate optimum solutions for this problem, but the simplex algorithm shows only two of them, corresponding to the corner points.
• Every point on the line joining these two solutions is also optimal.
• The advantage of computing the alternate corner point optima is that one of them may be chosen for implementation after considering other aspects.

• For example, the solution with X1 = 25/8 can result in the usage of fewer resources, since the slack X4 is basic.

• The solution X1 = 5/7, X2 = 45/14 uses all the resources. Sometimes one of the solutions may be integer valued, resulting in immediate acceptance for implementation.

Example
Maximize Z = 4X1 +3X2
Subject to
X1 -6X2  5
3X1  11
X1, X2  0

• Adding slack variables X3 and X4, we can start the simplex iteration with X3 and X4 as
basic variables. This is shown in Table 2.10
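Table 2.10

          Cj:    4     3     0     0
CB   Basic       X1    X2    X3    X4     RHS    θ
0    X3          1     -6    1     0      5      5
0    X4          [3]   0     0     1      11     11/3
     Cj – Zj     4     3     0     0      0

0    X3          0     -6    1     -1/3   4/3
4    X1          1     0     0     1/3    11/3
     Cj – Zj     0     3     0     -4/3   44/3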
• At the end of the first iteration, we observe that variable X2 with C2 – Z2 = 3 can enter the basis, but we are unable to fix the leaving variable because all the coefficients in the entering column are ≤ 0.
• The algorithm terminates because it is unable to find a leaving variable.

• This phenomenon is called unboundedness, indicating that the variable X2 can


take any value and still none of the present basic variables would become
infeasible.
• By the nature of the first constraint, we can observe that X2 can be increased to any
value and yet the constraints are feasible.
• The value of the objective function is infinity.

• In all simplex iterations, we enter the variable with the maximum positive value of
Cj – Zj. Based on this rule, we entered variable X1 in the first iteration.
• Variable X2 also with a positive value of 3 is a candidate and if we had decided to enter
X2 in the first iteration we would have realized the unboundedness at the first iteration
itself.
Though most of the time we enter a variable based on the largest coefficient rule (largest Cj – Zj), there is no guarantee that this rule terminates with the minimum number of iterations.
Any non basic variable with a positive value of Cj – Zj is a candidate to enter.

Other rules for entering variable are


1. Largest increase rule. Here, for every candidate entering variable, the corresponding minimum θ is found and the increase in the objective function, i.e. the product of θ and Cj – Zj, is computed.
• The variable with the maximum increase (product) is chosen as the entering variable.

2. First positive Cj – Zj.

3. Random – A non basic variable is chosen randomly and the value of Cj – Zj is


computed.
• It becomes the entering variable if the Cj – Zj is positive.
• Otherwise another variable is chosen randomly. This is repeated till an entering
variable is found.
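A small sketch contrasting the largest coefficient rule (the default above) with the largest increase rule, on the first tableau of problem P1 (Python, an assumption):

def largest_coefficient(cj_zj):
    j = max(range(len(cj_zj)), key=lambda k: cj_zj[k])
    return j if cj_zj[j] > 0 else None       # None: optimal, nothing enters

def largest_increase(cj_zj, min_theta):
    best_j, best_gain = None, 0.0
    for j, (d, t) in enumerate(zip(cj_zj, min_theta)):
        if d > 0 and t is not None and d * t > best_gain:
            best_j, best_gain = j, d * t     # increase in Z = (Cj - Zj) * theta
    return best_j

cj_zj     = [6, 5]   # Cj - Zj of X1 and X2 in the first tableau of P1
min_theta = [4, 5]   # min theta if X1 enters: 4; if X2 enters: 5
print(largest_coefficient(cj_zj))            # 0 -> X1 enters (gain 6*4 = 24)
print(largest_increase(cj_zj, min_theta))    # 1 -> X2 enters (gain 5*5 = 25)

On this tableau the two rules disagree; as stated above, neither is guaranteed to reach the optimum in the fewest iterations.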
• Coming back to unboundedness, we observe that unboundedness is caused when the
feasible region is not bounded.
• Sometimes, the nature of the objective function can be such that even if the feasible region
is unbounded, the problem may have an optimum solution.
• The unboundedness defined here means that there is no finite optimum solution and the
problem is unbounded.

• One more aspect to be considered is called infeasibility.


• Can we have a situation where the linear programming problem does not have a solution at
all? Let us consider the following example:

Maximize Z = 4X1 +3X2


Subject to
X1 + 4X2  3
3X1 + X2  12
X1, X2  0

• Adding slack variable X3, surplus variable X4 and artificial variable a1, we can start the simplex algorithm using the big M method with X3 and a1 as basic variables.
          Cj:    4      3         0      0     -M
CB   Basic       X1     X2        X3     X4    a1     RHS   θ
0    X3          [1]    4         1      0     0      3     3
-M   a1          3      1         0      -1    1      12    4
     Cj – Zj     3M+4   M+3       0      -M    0

4    X1          1      4         1      0     0      3
-M   a1          0      -11       -3     -1    1      3
     Cj – Zj     0      -11M-13   -3M-4  -M    0
• Here the algorithm terminates when all the non basic variables X2, X3 and X4 have
negative values of Cj – Zj. The optimality condition seems to be satisfied but an
artificial variable is in the basis.
• This means that the problem is infeasible and does not have a feasible solution.

• Infeasibility is indicated by the presence of at least one artificial variable after


the optimum condition is satisfied.

• In this problem a1 = 3 indicates that the second constraint should have the RHS value
reduced by 3 to get a feasible solution with X1 = 3.
• Simplex algorithm not only is capable of detecting infeasibility but also shows the
extent of infeasibility.
Termination Conditions (Maximization objective)

• All non basic variables have negative values of Cj – Zj. Basic variables are either decision variables or slack variables. The algorithm terminates, indicating a unique optimum solution.

• Basic variables are either decision variables or slack variables. All non basic variables have Cj – Zj ≤ 0, and at least one non basic variable has Cj – Zj = 0. This indicates alternate optima. Proceed to find the other corner point and terminate.

• Basic variables are either decision variables or slack variables. The algorithm identifies an entering variable but is unable to identify a leaving variable because all values in the entering column are ≤ 0. This indicates unboundedness and the algorithm terminates.

• All non basic variables have Cj – Zj ≤ 0. An artificial variable still exists in the basis. This indicates infeasibility. The algorithm terminates.
Cycling
If the simplex algorithm fails to terminate (based on the above conditions) then it cycles.

Special examples
Simplex method can be used to solve simultaneous linear equations (if the solution has
nonnegative values). Let us consider an example

Solve
4X1 + 3X2 = 25
2X1 + X2 = 11

We add artificial variables a1 and a2 and rewrite the equations as


4X1 + 3X2 + a1 = 25
2X1 + X2 + a2 = 11

We define an objective function Minimize a1 + a2.

If the original equations have a non negative solution, then we should obtain a feasible basis with X1 and X2 basic and Z = 0 for the linear programming problem. The simplex iterations are shown in Table 2.12.
          Cj:    0     0     -1    -1
CB   Basic       X1    X2    a1    a2     RHS    θ
-1   a1          4     3     1     0      25     25/4
-1   a2          [2]   1     0     1      11     11/2
     Cj – Zj     6     4     0     0      -36

-1   a1          0     [1]   1     -2     3      3
0    X1          1     1/2   0     1/2    11/2   11
     Cj – Zj     0     1     0     -3     -3

0    X2          0     1     1     -2     3
0    X1          1     0     -1/2  3/2    4
     Cj – Zj     0     0     -1    -1     0
• The optimum solution X1 = 4, X2 = 3 is the solution to the simultaneous
linear equations.
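The same answer can be cross-checked with a direct linear solve (a numpy sketch, an assumption):

import numpy as np
print(np.linalg.solve([[4, 3], [2, 1]], [25, 11]))   # [4. 3.]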

• Simplex method can also detect linear dependency among equations. Let us
consider the following example.
2X1 +3X2 = 6
4X1 +6X2 = 12

• As in the previous example, we add artificial variables a1 and a2. We write


an objective function Minimize a1 + a2 and set up the simplex table with a1
and a2 as basic variables.

• The simplex iterations are shown in Table 2.13


          Cj:    0     0     -1    -1
CB   Basic       X1    X2    a1    a2    RHS   θ
-1   a1          2     [3]   1     0     6     2
-1   a2          4     6     0     1     12    2
     Cj – Zj     6     9     0     0     -18

0    X2          2/3   1     1/3   0     2     3
-1   a2          0     0     -2    1     0     --
     Cj – Zj     0     0     -3    0     0

0    X1          1     3/2   1/2   0     3
-1   a2          0     0     -2    1     0
     Cj – Zj     0     0     -3    0     0
• The simplex algorithm has all Cj – Zj  0 at the end of the first iteration. An artificial
variable exists in the basis with value zero.
• This indicates a linearly dependent system.

• Also variable X1 can enter the basis (as in alternate optimum) and we will get the solution in the
next iteration. Presence of artificial variable with value zero at the optimum indicates a linearly
dependent system.

Let us consider another example to illustrate an unrestricted variable.

Maximize 4X1 + 5X2
Subject to
2X1 + 3X2 ≤ 8
X1 + 4X2 ≤ 10
X1 unrestricted, X2 ≥ 0

We replace variable X1 by X3 – X4 (with X3, X4 ≥ 0). We add slack variables X5 and X6 to get

Maximize 4X3 – 4X4 + 5X2
Subject to
2X3 – 2X4 + 3X2 + X5 = 8
X3 – X4 + 4X2 + X6 = 10
The simplex iterations are shown in Table 2.14.

          Cj:    4      -4     5     0      0
CB   Basic       X3     X4     X2    X5     X6     RHS   θ
0    X5          2      -2     3     1      0      8     8/3
0    X6          1      -1     [4]   0      1      10    5/2
     Cj – Zj     4      -4     5     0      0      0

0    X5          [5/4]  -5/4   0     1      -3/4   1/2   2/5
5    X2          1/4    -1/4   1     0      1/4    5/2   10
     Cj – Zj     11/4   -11/4  0     0      -5/4   25/2

4    X3          1      -1     0     4/5    -3/5   2/5   --
5    X2          0      0      1     -1/5   [2/5]  12/5  6
     Cj – Zj     0      0      0     -11/5  2/5    68/5

4    X3          1      -1     3/2   1/2    0      4
0    X6          0      0      5/2   -1/2   1      6
     Cj – Zj     0      0      -1    -2     0      16
• Here the optimum solution is X3 = 4, Z = 16, indicating that the original variable X1 = 4 at the optimum.
• However, we observe that variable X4 has C4 – Z4 = 0 and can enter.
• We also realize that there is no leaving variable.

• Does this indicate unboundedness?

• The answer is No.


• This is because the original unrestricted variable has been rewritten as two variables.
• X3 = 4, X4 = 0 indicates that original variable X1 = 4. Incidentally, X1 = 4 can be represented by any combination of X3 and X4 such that X3 – X4 = 4 and X3, X4 ≥ 0.

• This is what is implied by variable X4 trying to enter the basis and indicating that it can
take any positive value.
• Whenever we have an unrestricted variable in the original problem that has been
written as the difference of two variables, this phenomenon will occur.

• If the original unrestricted variable is in the optimum solution, then one of the two will take a positive value at the optimum and the other will try to enter, indicating unboundedness, because it can take any value and still retain the value of the original variable at the optimum.

• It is to be noted that both X3 and X4 will not be basic at the optimum.

• Either one of them will be in the basis if the original variable is in the solution.

• Both will be non basic if the original unrestricted variable is not in the optimum
solution.
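As a side note, modern solvers can handle free variables directly, without splitting X1 into X3 - X4. A sketch with scipy (an assumption):

from scipy.optimize import linprog

res = linprog(c=[-4, -5],                         # maximize 4X1 + 5X2
              A_ub=[[2, 3], [1, 4]],
              b_ub=[8, 10],
              bounds=[(None, None), (0, None)],   # X1 unrestricted, X2 >= 0
              method="highs")
print(res.x, -res.fun)                            # [4. 0.] 16.0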
