
3 Linear Programming

In this chapter we review the most important facts about Linear Programming. Although this chapter is self-contained, it cannot be considered a comprehensive treatment of the field. The reader unfamiliar with Linear Programming is referred to the textbooks mentioned at the end of this chapter.
The general problem reads as follows:

LINEAR PROGRAMMING
Instance: A matrix A ∈ ℝ^{m×n} and column vectors b ∈ ℝ^m, c ∈ ℝ^n.
Task: Find a column vector x ∈ ℝ^n such that Ax ≤ b and cᵀx is maximum, decide that {x ∈ ℝ^n : Ax ≤ b} is empty, or decide that for all α ∈ ℝ there is an x ∈ ℝ^n with Ax ≤ b and cᵀx > α.

Here cᵀx denotes the scalar product of the vectors. The notation x ≤ y for vectors x and y (of equal size) means that the inequality holds in each component. If no sizes are specified, the matrices and vectors are always assumed to be compatible in size. We often omit indicating the transposition of column vectors and write e.g. cx for the scalar product. By 0 we denote the number zero as well as all-zero vectors and all-zero matrices (the order will always be clear from the context).
A linear program (LP) is an instance of the above problem. We often write a linear program as max{cx : Ax ≤ b}. A feasible solution of an LP max{cx : Ax ≤ b} is a vector x with Ax ≤ b. A feasible solution attaining the maximum is called an optimum solution.
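As a quick illustration (ours, not part of the book's text), the three possible outcomes named in the problem formulation (optimum found, infeasible, unbounded) can be observed with an off-the-shelf solver. Here we use SciPy's linprog, which minimizes by default, so we pass −c; the status codes are SciPy's, not the book's.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp(c, A, b):
    """Solve max{cx : Ax <= b} via linprog, which minimizes -cx.
    bounds=(None, None) disables linprog's default x >= 0 restriction."""
    return linprog(-np.asarray(c, dtype=float), A_ub=A, b_ub=b,
                   bounds=[(None, None)] * len(c), method="highs")

# Optimal: max x1 + x2 subject to x1 <= 1, x2 <= 1, x1 >= 0, x2 >= 0.
opt = solve_lp([1, 1], [[1, 0], [0, 1], [-1, 0], [0, -1]], [1, 1, 0, 0])
print(opt.status, -opt.fun)  # 0 2.0  (status 0 = optimal)

# Infeasible: x1 <= 0 and x1 >= 1 cannot both hold, so P is empty.
inf = solve_lp([1], [[1], [-1]], [0, -1])
print(inf.status)            # 2  (status 2 = infeasible)

# Unbounded: max x1 with only x1 >= 0; cx exceeds every alpha.
unb = solve_lp([1], [[-1]], [0])
print(unb.status)            # 3  (status 3 = unbounded)
```

Each case corresponds to one of the three alternatives in the Task above.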
As the problem formulation indicates, there are two possibilities when an LP has no solution: the problem can be infeasible (i.e. P := {x ∈ ℝ^n : Ax ≤ b} = ∅) or unbounded (i.e. for all α ∈ ℝ there is an x ∈ P with cx > α). If an LP is neither infeasible nor unbounded, it has an optimum solution:

Proposition 3.1. Let P = {x ∈ ℝ^n : Ax ≤ b} ≠ ∅ and c ∈ ℝ^n with δ := sup{cᵀx : x ∈ P} < ∞. Then there exists a vector z ∈ P with cᵀz = δ.

Proof: Let U be a matrix whose columns are an orthonormal basis of the kernel of A, i.e. UᵀU = I and AU = 0, and rank(A′) = n, where A′ is the matrix obtained from A by appending the rows of Uᵀ, and b′ is obtained from b by appending a zero entry for each row of Uᵀ. We show that for every y ∈ P there exists a subsystem A″x ≤ b″ of A′x ≤ b′ such that A″ is nonsingular, y′ := (A″)⁻¹b″ ∈ P, and cᵀy′ ≥ cᵀy. As there are only finitely many such subsystems, one of these y′ attains the maximum (cᵀy′ = δ), and the assertion follows.

[B. Korte and J. Vygen, Combinatorial Optimization, Algorithms and Combinatorics 21, DOI 10.1007/978-3-642-24488-9_3, © Springer-Verlag Berlin Heidelberg 2012]
So let y ∈ P, and denote by k(y) the rank of A″ for the maximal subsystem A″x ≤ b″ of A′x ≤ b′ with A″y = b″. Suppose that k(y) < n. We show how to find a y′ ∈ P with cᵀy′ ≥ cᵀy and k(y′) > k(y). After at most n steps we have a vector y′ with k(y′) = n, as required.
If Uᵀy ≠ 0, we set y′ := y − UUᵀy. Since y + λUUᵀc ∈ P for all λ ∈ ℝ, we have sup{cᵀ(y + λUUᵀc) : λ ∈ ℝ} ≤ δ < ∞ and hence cᵀU = 0 and cᵀy′ = cᵀy. Moreover, Ay′ = Ay − AUUᵀy = Ay and Uᵀy′ = Uᵀy − UᵀUUᵀy = 0.
Now suppose that Uᵀy = 0. Let v ≠ 0 with A″v = 0. Denote by aᵢx ≤ βᵢ the i-th row of Ax ≤ b. Let κ := min{(βᵢ − aᵢy)/(aᵢv) : aᵢv > 0} and λ := max{(βᵢ − aᵢy)/(aᵢv) : aᵢv < 0}, where min ∅ = ∞ and max ∅ = −∞. We have λ ≤ 0 ≤ κ, and at least one of λ and κ is finite (because A′v ≠ 0 but Uᵀv = 0). For μ ∈ ℝ with λ ≤ μ ≤ κ we have A″(y + μv) = A″y + μA″v = A″y = b″ and A(y + μv) = Ay + μAv ≤ b, i.e. y + μv ∈ P. Thus, as sup{cᵀx : x ∈ P} < ∞, we have κ < ∞ if cᵀv > 0 and λ > −∞ if cᵀv < 0.
Moreover, if cᵀv ≥ 0 and κ < ∞, we have aᵢ(y + κv) = βᵢ for some i. Analogously, if cᵀv ≤ 0 and λ > −∞, we have aᵢ(y + λv) = βᵢ for some i. Thus in each case we have found a vector y′ ∈ P with cᵀy′ ≥ cᵀy and k(y′) ≥ k(y) + 1. □
This justifies the notation max{cᵀx : Ax ≤ b} instead of sup{cᵀx : Ax ≤ b}.
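Proposition 3.1 can also be checked numerically on a small example (our illustration, using SciPy's linprog rather than the book's machinery): HiGHS returns a vertex optimum, and the constraints tight at that point form a subsystem A″x = b″ of full rank n, exactly the kind of nonsingular subsystem constructed in the proof.

```python
import numpy as np
from scipy.optimize import linprog

# max{x1 + 2*x2 : x1 + x2 <= 4, x1 <= 3, x2 <= 3, x1 >= 0, x2 >= 0}
A = np.array([[1, 1], [1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)
b = np.array([4, 3, 3, 0, 0], dtype=float)
c = np.array([1, 2], dtype=float)

res = linprog(-c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2, method="highs")
x = res.x

# Rows of A that hold with equality at the returned optimum x = (1, 3):
# x1 + x2 = 4 and x2 = 3. Their rank is n = 2, i.e. x is determined by
# a nonsingular subsystem A''x = b'', as in the proof of Proposition 3.1.
tight = np.isclose(A @ x, b)
print(-res.fun)                          # 7.0
print(np.linalg.matrix_rank(A[tight]))  # 2
```

Note the rank check relies on the solver returning a vertex solution, which the HiGHS simplex method does here.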
Many combinatorial optimization problems can be formulated as LPs. To do
this, we encode the feasible solutions as vectors in Rn for some n. In Section 3.5 we
show that one can optimize a linear objective function over a finite set S of vectors
by solving a linear program. Although the feasible set of this LP contains not only
the vectors in S but also all their convex combinations, one can show that among
the optimum solutions there is always an element of S .
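A toy version of this idea (our example, anticipating Section 3.5): take S to be the 0/1 vectors in ℝ^3. For this particular S the convex hull happens to be the box {x : 0 ≤ x ≤ 1}, so maximizing cx over the hull is an LP with simple bound constraints, and its optimum value matches the maximum over S, attained at an element of S.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Finite set S: the eight 0/1 vectors in R^3.
S = [np.array(v, dtype=float) for v in itertools.product([0, 1], repeat=3)]
c = np.array([3.0, -1.0, 2.0])

# Direct maximization over the finite set S.
best_in_S = max(c @ s for s in S)

# LP over the convex hull of S, which for this S is {x : 0 <= x <= 1}.
res = linprog(-c, bounds=[(0, 1)] * 3, method="highs")

print(best_in_S)  # 5.0
print(-res.fun)   # 5.0; res.x = (1, 0, 1) is itself an element of S
```

The LP feasible set contains all convex combinations of S, yet the optimum is attained at one of the original 0/1 vectors, as claimed.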
In Section 3.1 we compile some terminology and basic facts about polyhedra, the sets P = {x ∈ ℝ^n : Ax ≤ b} of feasible solutions of LPs. In Sections 3.2
and 3.3 we present the SIMPLEX ALGORITHM, which we also use to derive the
Duality Theorem and related results (Section 3.4). LP duality is a most important
concept which explicitly or implicitly appears in almost all areas of combinatorial
optimization; we shall often refer to the results in Sections 3.4 and 3.5.

3.1 Polyhedra
Linear Programming deals with maximizing or minimizing a linear objective func-
tion of finitely many variables subject to finitely many linear inequalities. So the
set of feasible solutions is the intersection of finitely many halfspaces. Such a set is
called a polyhedron:
