Graph Theory for Undergraduates
D. Yogeshwaran
Indian Statistical Institute, Bangalore.
Contents

Index   v

2 Introduction to Graphs   3
2.1 Definition and some motivating examples   3
2.1.1 Popular examples of graphs   4
2.2 Some history and more motivation   5
2.3 Course overview   11
2.3.1 Flows, Matchings and Games on Graphs   11
2.3.2 Graphs and matrices   12
2.3.3 Random graphs and probabilistic method   12

4 Spanning Trees   21
4.1 Trees and Cayley's theorem   21
4.2 Minimal spanning trees   24
4.3 Kruskal and other algorithms   25
4.4 ***Some questions : Random spanning trees***   27
10 Planar graphs   69
10.1 Euler's formula and Kuratowski's theorem   70
10.2 Coloring of Planar graphs   71
10.3 ***Graph dual***   72
10.4 Exercises   73
10.5 Further Reading   74
Index
Kuratowski’s theorem, 70
Laplacian matrix, 62
Mantel’s theorem, 29
matching, 35
matroid, 58
max-flow min-cut theorem, 45
Menger’s theorem, 50
Minimal Spanning Tree, 24
Minimal spanning tree algorithms, 25
Multi-graphs, 3
Neighbourhood, 15
Path, 16
perfect matching, 35
planar graph, 69
probabilistic method, 32
Prufer code, 21
Ramsey numbers, 32
random spanning trees, 27
Read’s conjecture, 58
Real-life graphs, 4
Regular graph, 15
Rota-Welsh conjecture, 58
Shannon rate, 42
Spanning subgraph, 15
Stable matching, 42
vertex connectivity, 49
walk, 16
Chapter 1
Disclaimers and Warnings
• These notes are highly incomplete, not well written, not well 'LaTeXed' and not meant to be taken as seriously as a book.
• The notes might fail on many aesthetic counts, and the author has put little effort into making them more pleasing to the eye.
• These are written for an introductory course on graph theory for second-year undergraduate students. http://www.isibang.ac.in/~adean/infsys/database/Bmath/GT.html
• These are a compendium of some of the class material (mostly definitions and statements of results), meant to serve as a pointer for students. I hope to add proof details over the years.
• All figures are taken from various sources; I am sorry for not acknowledging them all.
• I am thankful to students of the course in the years 2018 and 2019 for pointing out errors,
inaccuracies and also listening to my lectures :-)
• Unless mentioned otherwise, the proofs and results are all borrowed from some book or the other. A few of these books are listed in the bibliography.
• Do not look for any new results, proofs or anything else fancy here; this is just material arranged according to my own teaching convenience.
• I have tried to mention some related research-level questions and open problems at the end of
each chapter.
• The main references are [Bollobas 2013, Van Lint and Wilson, West 2001, Diestel 2000].
Chapter 2
Introduction to Graphs
Definition 2.1.1. (Graph). A (simple) graph G consists of a finite or countable vertex set V := V(G) and an edge set E := E(G) ⊂ \binom{V}{2}, the set of unordered pairs of vertices.
We shall consider only locally finite graphs, i.e., graphs in which every vertex occurs in only finitely many edges.
For a vertex set V, we represent edges as (v, w), v, w ∈ V. We also write v ∼ w to denote that (v, w) ∈ E. A very common pictorial representation of graphs is as follows : vertices are represented as points on the plane and edges are lines / curves between the two vertices. See Figure 2.1. As an exercise, explicitly define the graphs based on these representations.
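As a small computational aside (not part of the original notes), one minimal way to encode such a graph is as a vertex set together with a set of unordered pairs. The sketch below is in Python, with an arbitrarily chosen example graph.

# A minimal sketch of a simple graph as a vertex set and a set of unordered edges.
# The particular vertices and edges are illustrative only.
V = {1, 2, 3, 4}
E = {frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4}), frozenset({4, 1})}

def neighbours(v):
    """Return the neighbourhood N_v = {w : v ~ w}."""
    return {w for e in E if v in e for w in e if w != v}

print(neighbours(1))  # {2, 4}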
See these two talks by Hugo Touchette
(http://www.physics.sun.ac.za/~htouchette/archive/talks/networks-short1.pdf ) and
(http://www.physics.sun.ac.za/~htouchette/archive/talks/complexnetworks1.pdf ) for many
examples and applications of graphs in real-life problems. We shall briefly touch upon these in our
examples as well.
The following two variants shall be hinted at in our motivation and some examples, but we shall not discuss them for most of the course.
Directed graphs: These are graphs with directed edges or, equivalently, graphs whose edge-pairs are ordered.
Multi-graphs: These are graphs in which multiple edges between vertices, including self-loops, are allowed.
(Footnote: All of the figures in these notes are taken from the internet and are not mine.)
Example 2.1.3. (Facebook graph) V is the set of Facebook users and an edge is placed between two vertices if they are friends of each other. See Figure 2.1.3.
Example 2.1.4. (Road networks) V is the set of cities and an edge represents roads/trains/air-routes
between the cities. See Figure 2.1.4
Example 2.1.5. (Collaboration graph) V is the set of all mathematicians who have published (say, as listed on MathSciNet) and an edge represents that two mathematicians have collaborated on a paper together.
Example 2.1.8. (Traffic Networks) G is the road network with the weight w denoting average traffic
in a day. See Figure 2.1.8
Example 2.1.9. (Football graph) This is an example of a weighted directed graph. Let the vertex set be the 11 players in a team, and let a directed edge from i to j represent that player i has passed to player j. Associated with such a directed edge (i, j) is a weight w(i, j) that denotes the number of passes from player i to player j. See the football graph from the Spain vs Holland final in the 2010 WC in Figure 2.1.9. This example best illustrates the appealing visualization offered by graphs. Spain's passing game is very evident in the graph, and this gives a good way to analyse such effects in sports and other domains.
Figure 2.8: Football graph from the Spain vs Holland 2010 WC Final.
Example 2.2.2. (Electrical Networks. Kirchhoff, 1847) Electrical networks can be represented as weighted directed graphs, with currents and resistances viewed as weights. This formalism can explain Kirchhoff's and Ohm's laws. The connection between graphs and electrical networks is highly useful not only for graph theory and electrical networks but is also used in random walks and algebraic graph theory. It is not entirely inaccurate to speak of Kirchhoff's formalism as a discrete cohomology.
Example 2.2.3. (Chemical Isomers. Cayley, 1857) Atoms were represented by vertices and bonds between atoms by edges. Such a representation was used to understand the structure of molecules. We shall see Cayley's tree enumeration formula, which was used to enumerate the number of chemical isomers of a compound. See Figure 2.2.3.
Example 2.2.4. (Tour of cities. Hamilton, 1859) Given a set of cities and roads between them, find a path that starts at one city, visits all the cities exactly once and returns to the starting city. Can we always find such a tour ?
Example 2.2.5. (Four colour theorem) Suppose we take a map of the world and assume that countries are contiguous land masses. Let countries be vertices and draw an edge between two countries if they share a boundary. Can we colour the countries such that neighbouring countries have different colours ? What is the minimum number of colours required for the same ?
A more general question : can any graph be drawn as a map, i.e., can any graph be drawn on the plane such that edges do not cross each other ?
Already, we have seen some historic questions that shall be discussed in the course. Now, we shall see
something more specific.
Example 2.3.1 (Maximum traffic flow). Consider the traffic network (weighted graphs) and take a
starting point (source) and ending point (sink). What is the maximum amount of traffic that can flow
from source to the sink in one instance ?
The solution is very famously known as the max flow-min cut theorem and has many applications.
Example 2.3.2 (Hostel room allocation). There are n rooms and m students. Each student gives a list of rooms acceptable to them. Can the warden allot rooms to students such that each student gets a room from their list ?
Hall's marriage theorem shall give a deceptively simple necessary and sufficient condition for the same. Hall's marriage theorem shall be proved using the max-flow min-cut theorem.
Example 2.3.3 (Hide and Seek game). Consider an area with horizontal main roads and vertical cross roads (see the grid in Figure 2.3.3). There are safe houses at certain intersections, marked by crosses in the figure. A robber chooses to hide at one of the safe houses. A cop wants to find the main road or the cross road on which the robber stays. What is the best strategy for the cop to succeed ? What is the best strategy for the robber to defeat the cop ? We shall exhibit a strategy for both using Hall's marriage theorem.
A graph G on V can be represented as a V × V matrix with a 1 in the entry (v, w) iff (v, w) is an edge in G. One can represent weighted and directed graphs as well. What do rank and nullity mean here ? Are there other matrices that encode the properties of graphs ? This viewpoint shall connect Kirchhoff's formalism with cohomology theory.
If time permits, we shall mention random graphs. We fix [n] as the vertex set and choose edges at
random i.e., at each edge toss a coin and place the edge if it lands heads. What are the properties of
these graphs ?
Probabilistic method : Suppose we want to show that there exist graphs with a certain property. We show that the random graph satisfies the property with positive probability; thus, the set of graphs satisfying the property is non-empty. This approach was pioneered by Erdös and is still remarkably successful in proving various results.
I shall try to emphasize various mathematical topics (such as cohomology) that show up in their simplest incarnation in graph theory, as well as the many mathematical tools, such as linear algebra, analysis and probability, that are used to study graphs. A very readable survey on the growing importance of graph theory is [Lovasz 2011].
Chapter 3
The very basics
Example 3.1.2 (Intersection graph). Let S_1, . . . , S_n be subsets of a set S. Define G with V = [n] and i ∼ j if S_i ∩ S_j ≠ ∅.
Example 3.1.3 (Delaunay graph). Let P ⊂ R^d, d ≥ 1, be a finite set of distinct points. For y ∈ R^d, set d(y, P) := min_{x∈P} |x − y|, and define C_x := {y : d(y, P) = |x − y|} for x ∈ P. The Delaunay graph is the intersection graph on P with intersecting sets C_x, x ∈ P. The set C_x is called the Voronoi cell of x.
Example 3.1.4 (Euclidean Lattices). Let B_r(x) be the closed ball of radius r centered at x. The d-dimensional integer lattice is the intersection graph formed with Z^d as vertex set and B_{1/2}(z), z ∈ Z^d, as the intersecting sets. Alternatively, z ∼ z′ if \sum_{i=1}^{d} |z_i − z′_i| = 1.
Example 3.1.5 (Cayley graphs). Let (H, +) be a group with a finite set of generators S such that S = −S (symmetric). The Cayley graph G is defined with vertex set V = H and x ∼ y if x − y ∈ S. Since S is symmetric, y − x ∈ S iff x − y ∈ S, so the adjacency relation is well-defined; abelianness of the group does not matter.
Exercise 3.1.7. Show that Euclidean lattices are Cayley graphs. Find the generators S.
Example 3.1.9 (Complementary graph). Let G = (V, E) be a graph. The complementary graph is G^c = (V, \binom{V}{2} − E).
Example 3.1.10 (Line graph). Let G = (V, E) be a graph. The line graph L(G) has vertex set E, and e_1 ∼ e_2 if they are adjacent (i.e., share an end-vertex) in G.
Example 3.1.11 (Petersen graph). The vertex set of the graph is \binom{[5]}{2} and {i, j} ∼ {k, l} if {i, j} ∩ {k, l} = ∅.
Exercise 3.1.12. Show that the Petersen graph is the complement of the line graph of K5 .
Exercise 3.1.13. Are the following three graphs isomorphic to Petersen graph ?
Exercise 3.1.14. Find the number of edges in the line graph L(G) in terms of the vertex degrees of G.
Definition 3.2.1 (Graph homomorphism and isomorphism). Suppose G, H are two graphs. A function φ : V(G) → V(H) is said to be a graph homomorphism if x ∼ y implies φ(x) ∼ φ(y). φ is said to be an isomorphism if φ is a bijection and x ∼ y iff φ(x) ∼ φ(y), i.e., φ, φ⁻¹ are graph homomorphisms. G and H are isomorphic (G ≅ H) if there exists an isomorphism between G and H. An automorphism is an isomorphism φ : G → G.
Exercise 3.2.2. The set of all automorphisms of G is called Aut(G). Define a binary operation on Aut(G) as follows : for g, f ∈ Aut(G), g.f = g ◦ f, i.e., the composition operation. Is Aut(G) a group ?
Essentially, G and H are the same graphs up to re-labelling. H is a subgraph of G if V(H) ⊂ V(G) and E(H) ⊂ E(G). H is an induced subgraph of G if H is a subgraph of G and whenever v, w ∈ V(H) are such that (v, w) ∈ E(G), then (v, w) ∈ E(H).
Exercise* 3.2.3. H is an induced subgraph of G iff H is the maximal subgraph in G with vertex set
V (H).
Exercise 3.2.5. Show that there exists a homomorphism from G to K2 iff G is bi-partite.
Exercise* 3.2.6. Can one characterize classes of graphs G which have homomorphisms to Kk ?
Exercise* 3.2.7. Denote by Hom*(H, G) the number of injective homomorphisms from H to G. Let |V(H)| = k. Show that

|Hom*(H, G)| = \sum^{≠}_{(v_1, . . . , v_k) ∈ V(G)^k} 1[H ⊂ <{v_1, . . . , v_k}>],

where \sum^{≠} denotes that the sum is over distinct elements and <v_1, . . . , v_k> is the induced subgraph on the vertices v_1, . . . , v_k.
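As an aside (my own illustration, not from the notes), the identity can be checked by brute force on tiny graphs. The Python sketch below counts injective homomorphisms directly by enumerating injective maps; the graphs H (a path on 3 vertices) and G (a 4-cycle) are arbitrary choices.

from itertools import permutations

# Hypothetical example graphs: H is a path on 3 vertices, G is a 4-cycle.
H_vertices = [0, 1, 2]
H_edges = [(0, 1), (1, 2)]
G_vertices = [0, 1, 2, 3]
G_edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}

def count_injective_homs():
    """Count injective maps phi : V(H) -> V(G) with x ~ y in H implying phi(x) ~ phi(y) in G."""
    count = 0
    for image in permutations(G_vertices, len(H_vertices)):
        phi = dict(zip(H_vertices, image))
        if all(frozenset({phi[x], phi[y]}) in G_edges for (x, y) in H_edges):
            count += 1
    return count

print(count_injective_homs())  # 8 for this choice of H and G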
Definition 3.2.8 (Some notations). Let G be the set of all finite graphs.
• Degree : dv := |Nv |.
• v is isolated if dv = 0.
Exercise* 3.2.9. Which of the graphs in Section 3.1 are regular and what are their average degrees
? Can you compute Aut(G) for these examples ?
Exercise* 3.2.10. What can you say about the Minimum degree, maximum degree and average degree
of Complementary and Line graphs given those of the original graph ?
Lemma 3.2.11. \sum_v d_v = 2|E|; in particular, \sum_v d_v is even. Thus the number of odd-degree vertices is even and d(G) = 2ε(G).
Exercise 3.2.12. Suppose G is a 3-regular graph on 10 vertices such that any two non-adjacent
vertices have exactly one common neighbour. Is G isomorphic to Petersen graph ?
Definition 3.2.13. (Path, Walk and Cycle) An ordered set of vertices P = v_0 . . . v_k is said to be a walk from v_0 to v_k if v_i ∼ v_{i+1} for all i. A walk P = v_0 . . . v_k is said to be a path from v_0 to v_k if the edges (v_i, v_{i+1}) are distinct for all i. A path is simple (also called a self-avoiding walk) if v_i ≠ v_j for all i ≠ j. A walk or path is said to be closed if v_k = v_0 and open otherwise. A closed path is also called a circuit. A circuit C = v_0 . . . v_{k−1} v_0 with no repetition of intermediate vertices is called a cycle, i.e., v_0, . . . , v_{k−1} are distinct.
Exercise 3.2.14. For v 6= w, show that there exists a walk from v to w iff there exists a path from v
to w iff there exists a self-avoiding walk from v to w.
Exercise* 3.2.15. Show that → partitions V into equivalence classes and that the equivalence class of v is C_v. Show that C_v is the maximal connected subgraph containing v.
The equivalence classes induced by → are called connected components, and the number of connected components is denoted by β_0(G).
Exercise 3.2.17. Show that δ(G), ∆(G), d(G), β0 (G) are all graph invariants.
Exercise 3.2.18. Let G be a simple graph with at least two vertices. Show that G must contain at
least two vertices with the same degree.
Set dG (v, v) = 0 for all v ∈ V . Show that dG (v, w) = dG (w, v) and further dG (v, w) = 0 implies that
v = w.
Even if G is not connected, d_G satisfies the three axioms of a metric space (allowing the value ∞). Further, define the diameter of a graph as diam(G) := max{d_G(u, v) : u, v ∈ V}.
Given a vertex v and r > 0, set B_r(v) := {w : d_G(v, w) ≤ r}, the ball of radius r at v. For un-weighted graphs, show that |B_n(v)| ≤ \sum_{i=0}^{n} ∆(G)^i.
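A quick algorithmic aside (mine, not from the notes): B_r(v) can be computed by breadth-first search, which also lets one check the bound above on small examples. The graph below is an arbitrary illustration, given as an adjacency dictionary.

from collections import deque

# Sketch: compute B_r(v) = {w : d_G(v, w) <= r} together with the distances, by BFS.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

def ball(v, r):
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == r:
            continue  # neighbours of u would lie at distance r + 1
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

print(ball(0, 2))  # {0: 0, 1: 1, 2: 1, 3: 2}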
Exercise* 3.3.2. Let G be the Cayley graph of a free group generated by a finite symmetric set of
generators S = {s1 , −s1 , . . . , sn , −sn }. Compute |Bn (e)| for all n.
For (un-weighted) graphs, let Π_n(v) be the set of self-avoiding walks of length exactly n from v. It holds that |Π_n(v)| ≤ ∆(G)(∆(G) − 1)^n for n ∈ N. Thus for L^d, we have that |Π_n(O)| ≤ 2d(2d − 1)^n.
Exercise* 3.3.4. Is it true that there exists a κ_d such that, as n → ∞, we have |Π_n(O)|^{1/n} → κ_d ∈ [0, ∞) ? Hint : use that log |Π_n(O)| is sub-additive.
Definition 3.4.1 (Adjacency matrix. ). Let G be a graph on n vertices. For simplicity, assume
V = [n]. The adjacency matrix A := A(G) := (A(i, j))1≤i,j≤n of a simple finite graph is defined as
follows : A(i, j) = 1[i ∼ j]. The definition can be appropriately extended for multi-graphs.
Lemma 3.4.2. Let G be a graph on n vertices and A be its adjacency matrix. Then A^l(i, j) is the number of walks of length l from i to j.
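As a quick numerical illustration (my own, not from the notes), the lemma can be checked on a small example: below A is the adjacency matrix of the 4-cycle and the entries of A^3 count walks of length 3.

import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])   # adjacency matrix of the 4-cycle

A3 = np.linalg.matrix_power(A, 3)
print(A3[0, 1])       # number of walks of length 3 from vertex 0 to vertex 1 (= 4)
print(np.trace(A3))   # total number of closed walks of length 3 (= 0, since C_4 is bipartite)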
Exercise* 3.4.3. Let λ_1, . . . , λ_n be the eigenvalues of A. Show that the number of closed walks of length l in G is \sum_{i=1}^{n} λ_i^l. Here, we count the walks v_1, . . . , v_{l−1}, v_1 and v_2, . . . , v_{l−1}, v_1, v_2 as distinct walks.
Exercise* 3.4.4. Count the number of closed walks of length l in the complete graph Kn .
Lemma 3.4.5. Let G be a connected graph on vertex set [n]. If d(i, j) = m, then I, A, . . . , A^m are linearly independent.
Proof. Assume i ≠ j. Since there is no path from i to j of length less than m, A^k(i, j) = 0 for all k < m and A^m(i, j) > 0. Thus, if I, A, . . . , A^m are linearly dependent with coefficients c_0, . . . , c_m, then by the above observation and the non-negativity of the entries of A^k for all k, we have that c_m = 0, i.e., I, A, . . . , A^{m−1} are linearly dependent. Since d(i, j) = m, there exists j′ such that d(i, j′) = m − 1. Now applying the above argument recursively, we get that c_0 = c_1 = . . . = c_m = 0, a contradiction.
Corollary 3.4.6. Let G be a connected graph with k distinct eigenvalues. Then k > diam(G).
Proof. Let d = diam(G). Recall that the minimal polynomial of a matrix A is the monic polynomial Q of least degree such that Q(A) = 0. By Lemma 3.4.5, we have that deg(Q) > d. The proof is complete by observing that, since A is real symmetric and hence diagonalizable, the number of distinct eigenvalues of A equals deg(Q).
Exercise* 3.5.2. Show that a finite graph G is Eulerian iff G is connected and is an edge-disjoint union of cycles, i.e., G = C_1 ∪ . . . ∪ C_m where the C_i's are cycles with no common edges.
Theorem 3.5.3 (Euler’s theorem ; Veblen 1912.). A finite connected graph is Eulerian iff every vertex
has even degree.
Proof. Since the graph is Eulerian, let C_1, . . . , C_m be the partition into edge-disjoint cycles (using Exercise 3.5.2). Viewing the C_i's as graphs by themselves, observe that d_v(C_i) = 2·1[v ∈ C_i], i.e., vertices in C_i have degree 2 and all other vertices have degree 0. Further, we have by edge-disjointness of the C_i's that

d_v(G) = \sum_i d_v(C_i) = 2 \sum_i 1[v ∈ C_i],

which is even for every vertex v.
Exercise* 3.5.4 (Konigsberg problem). Prove Euler’s theorem for multi-graphs and hence show that
Konigsberg problem has no solution.
Exercise* 3.5.5. Every closed odd length walk contains an odd length cycle.
Theorem 3.5.6 (König’s theorem). A graph is bi-partite iff it has no odd cycles.
The König here is the Hungarian mathematician Dénes König, whose father Gyula König was also a well-known mathematician. Dénes wrote the first book on graph theory.
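A small algorithmic companion (my own sketch, not from the notes): the characterization can be tested by a breadth-first 2-colouring, which either produces a bipartition or exposes an edge joining two vertices of the same colour, witnessing an odd cycle.

from collections import deque

def is_bipartite(graph):
    """graph: dict mapping each vertex to a list of neighbours."""
    colour = {}
    for source in graph:
        if source in colour:
            continue
        colour[source] = 0
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for w in graph[u]:
                if w not in colour:
                    colour[w] = 1 - colour[u]
                    queue.append(w)
                elif colour[w] == colour[u]:
                    return False   # an odd cycle has been found
    return True

print(is_bipartite({0: [1], 1: [0, 2], 2: [1]}))        # True (a path)
print(is_bipartite({0: [1, 2], 1: [0, 2], 2: [0, 1]}))  # False (a triangle)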
a constant. To determine exact asymptotics of f2 (r) is known as the Gauss circle problem.
Here are two questions from geometric group theory. For more on this fascinating subject, refer
to [Clay and Margalit 2017].
Question 3.6.2. Suppose G is the Cayley graph of a finitely generated countable group such that n^{−d} |B_n(e)| → ∞ for all d ∈ (0, ∞). Is it true that n^{−a} \log |B_n(e)| → ∞ for some a > 0 ?
Question 3.6.3. Can you construct a finitely generated group such that its Cayley graph has the
following growth property for some a < 0.7 ?
0 < \liminf_{n→∞} n^{−a} \log |B_n(e)| ≤ \limsup_{n→∞} n^{−a} \log |B_n(e)| < ∞.
Note : If you have solutions for any of the above four questions, please consider submitting them here. It was recently shown by Duminil-Copin and Smirnov that κ = \sqrt{2 + \sqrt{2}} for the hexagonal lattice (see Figure 3.6). In listing IMO problems which can lead to research problems, Stanislav Smirnov mentions this ([Smirnov 2011]).
Chapter 4
Spanning Trees
Spanning trees (and minimal spanning trees) are a central object in combinatorial optimization, graph
theory and probability. See the end of the chapter for some probabilistic connections.
1. G is a forest.
Exercise 4.1.3. Prove that there exist at least two vertices of degree 1 in any tree with at least two vertices.
Theorem 4.1.4 (Cayley's formula). The number of labelled trees on n vertices (i.e., spanning trees of K_n) is n^{n−2}.
Proof via Prufer code (H. Prufer (1918)). We shall construct a bijection P : S(K_n) → [n]^{n−2}, where S(K_n) denotes the set of spanning trees of K_n.
Given a tree T on [n], we generate a sequence of trees T_1, . . . , T_{n−1} inductively as follows : Set T_1 = T. Given the tree T_i on n − i + 1 vertices, let x_i be the least labelled vertex of degree 1, and delete x_i (together with the edge incident on it) to obtain T_{i+1}. Denote by y_i the neighbour of x_i in T_i. Observe that T_{n−1} is K_2, at which point the process terminates. Now define P as
P(T) = (y_1, . . . , y_{n−2}).
Clearly P is a map from S(K_n) to [n]^{n−2}. We shall prove that it is a bijection by constructing P^{−1} using induction. For n = 2, P is trivially a bijection.
Some observations : (i) (y_2, . . . , y_{n−2}) is the Prufer code of T_2. (ii) The degree d_i = \sum_{j=1}^{n−2} 1[y_j = i] + 1 (try to prove this yourself by induction on n and then see the proof below). (iii) As a consequence, we have that x_k = min{i : i ∉ {x_1, . . . , x_{k−1}, y_k, . . . , y_{n−2}}} (again prove by induction on k using (ii)).
Suppose P is a bijection for n − 1 and let a = (a_1, . . . , a_{n−2}) ∈ [n]^{n−2}. Now, let x = min{i : i ∉ {a_1, . . . , a_{n−2}}}. Consider a′ = (a_2, . . . , a_{n−2}); by the induction assumption, a′ is the unique Prufer code of a tree T′ on [n] \ {x}, since P is a bijection on [n] \ {x}. Define T := T′ ∪ {(x, a_1)}. Clearly, T is a tree as x ∉ V(T′), and P(T) = a. If T″ is a tree such that P(T″) = a, then by property (ii) of the Prufer code, [n] \ {a_1, . . . , a_{n−2}} is precisely the set of vertices of degree 1 in T″. By definition of x, it has the least label among such vertices, and thus by construction (x_1, y_1) = (x, a_1). Hence, P(T″_2) = a′. By uniqueness of T′, we have that T″_2 = T′ and hence T″ = T. So P^{−1} is well-defined on [n]^{n−2} and P is 1−1 and onto. Thus P is a bijection.
Trivially, (ii) holds for n = 2. Assume that it holds for all trees on n − 1 vertices. Let T be a tree on [n]. By (i) and the induction hypothesis, for i ≠ x_1, we have that d_i(T_2) = \sum_{j=2}^{n−2} 1[y_j = i] + 1. Since d_i(T_2) = d_i(T) for all i ≠ x_1, y_1, and d_{x_1}(T) = 1, d_{y_1}(T) = d_{y_1}(T_2) + 1, (ii) holds for T as well.
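To make the bijection concrete, here is a short Python sketch (my own illustration, not part of the notes) of the encoding and decoding maps; the example tree is chosen arbitrarily.

def prufer_encode(n, edges):
    """edges: list of pairs on vertices 1..n forming a tree; returns the Prufer code."""
    adj = {v: set() for v in range(1, n + 1)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    code = []
    for _ in range(n - 2):
        x = min(v for v in adj if len(adj[v]) == 1)   # least labelled leaf
        y = adj[x].pop()                              # its unique neighbour
        adj[y].discard(x)
        del adj[x]
        code.append(y)
    return code

def prufer_decode(n, code):
    """Inverse map: rebuild the tree edges from a code in [n]^(n-2)."""
    degree = {v: 1 for v in range(1, n + 1)}
    for y in code:
        degree[y] += 1
    edges = []
    for y in code:
        x = min(v for v in degree if degree[v] == 1)  # least labelled unused leaf
        edges.append((x, y))
        degree[y] -= 1
        del degree[x]
    edges.append(tuple(sorted(degree)))               # the last two remaining vertices
    return edges

tree = [(1, 4), (2, 4), (3, 4), (4, 5)]               # a star plus a pendant edge
code = prufer_encode(5, tree)
print(code)                                           # [4, 4, 4]
print(prufer_decode(5, code))                         # [(1, 4), (2, 4), (3, 4), (4, 5)]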
Theorem 4.1.5 (Tree counting theorem). The number of labelled spanning trees on n vertices with degree sequence d_1, . . . , d_n is
\binom{n−2}{d_1 − 1, . . . , d_n − 1} = \frac{(n−2)!}{\prod_{i=1}^{n} (d_i − 1)!}
for n ≥ 3.
Show that Cayley’s theorem follows as a corollary of the tree counting theorem.
Proof. Denote by t(n; d_1, . . . , d_n) the number of trees with degree sequence d_1, . . . , d_n. The theorem holds for n = 3. Assume that it is true for n − 1. Assume that d_j = 1 (such a j exists for a realizable sequence, since \sum_i d_i = 2(n − 1) forces some degree to equal 1).
If j is joined to i in a tree T, then T − j is a tree on [n] − j with degree sequence d_1, . . . , d_i − 1, . . . , d_n. Thus we get that
t(n; d_1, . . . , d_n) = \sum_{i=1, i≠j}^{n} t(n − 1; d_1, . . . , d_i − 1, . . . , \hat{d}_j, . . . , d_n),
where \hat{d}_j means that d_j is absent. Note that for d_i = 1, t(n − 1; d_1, . . . , d_i − 1, . . . , \hat{d}_j, . . . , d_n) = 0, since a tree on at least two vertices has no vertex of degree 0. Now check inductively that t(n; d_1, . . . , d_n) = \binom{n−2}{d_1 − 1, . . . , d_n − 1}.
Exercise* 4.1.6. Give an alternative proof of tree counting theorem using Prufer codes.
Remark 4.1.7 (More proofs of tree counting theorem). As with any such a fundamental result, there
are multiple proofs.
1. We shall see a more powerful tree counting argument (known as Kirchoff ’s matrix tree theorem)
using matrix theory later.
2. There is another proof via bijection due to Joyal and a recent double counting argument due to
Pitman.
All these five proofs can be found in [Aigner et al. 2010, Chapter 30].
Exercise* 4.1.8. Let G be a graph with k components. Show that the following are equivalent.
1. T ⊂ G is a spanning forest (this terminology is a slight abuse of our usual usage of the term 'spanning'), i.e., a forest such that every component of the forest is a spanning tree of the corresponding component of G.
Exercise 4.1.9. Let H ⊂ G be a subgraph and e ∈ G \ H. Then exactly one of the following holds :
(1) β0 (H ∪ e) = β0 (H) − 1 or (2) β0 (H ∪ e) = β0 (H) and there exists a cycle C ⊂ H ∪ e such that
e ∈ C and C \ e is not a cycle in H.
The following statements about a graph G are equivalent :
1. G is a forest.
2. There exists a unique simple path between u and v for all u ≠ v which are in the same component.
3. Every edge of G is a cut-edge.
Proof. Since a forest has no cycles, (i) ⇒ (iii) follows from the above exercise. If every edge is a cut-edge, then there are no cycles and hence G is a forest, i.e., (iii) ⇒ (i). If there exist two distinct simple paths from u to v for some u ≠ v in the same component, then the union of the paths contains a cycle (prove this as an exercise !). This contradicts (i) and (iii). Thus, (i) and (iii) both imply (ii). Assuming (ii), it is easy to see that there is no cycle in G, and hence both (i) and (iii) hold.
Lemma 4.1.13 (Insertion property of Spanning Trees). Suppose T is a spanning tree of G and there exists e ∈ G − T. Then there exists e′ ∈ T such that T + e − e′ is also a spanning tree.
Proof. For any e′ ∈ T, T + e − e′ has the same number of edges as T. Hence it suffices to show that there exists e′ such that T + e − e′ does not have a cycle. This can be seen easily as follows : since T is a spanning tree, T + e contains a cycle. Now remove any edge e′ ≠ e of that cycle; then T + e − e′ does not contain a cycle.
Exercise* 4.1.14 (Deletion property of Spanning Trees). Suppose T, T′ are spanning trees of G and e ∈ T − T′. Then there exists e′ ∈ T′ such that T − e + e′ is also a spanning tree.
Of course, a minimal spanning tree exists if G is a connected graph. A set of edges S ⊂ E is said to be a cut if β_0(G − S) = β_0(G) + 1 and, for any S′ ⊊ S, β_0(G − S′) = β_0(G).
Proposition 4.2.2 (Some properties of MST). Let G be a connected graph with edge-weights.
2. Cut property : If M is an MST and C is a cut in G, then one of the minimal weight edges in C must be in M.
3. Cycle property : If M is a MST and C is a cycle in G, then one of the maximal weight edges
in C will not be in M .
Proof. (i) : Let T_1, T_2 be MSTs such that T_1 ≠ T_2. Since T_1, T_2 have the same vertex set, there exists an edge in T_1 ∆ T_2. Choose the edge e_1 with the least weight in T_1 ∆ T_2 and WLOG let e_1 ∈ T_1. Since T_2 is a spanning tree, T_2 + e_1 has a cycle C. Since C ⊄ T_1, there exists e_2 ∈ C − T_1, and also e_2 ∈ T_1 ∆ T_2. Thus w(e_2) > w(e_1), as e_1 has the least weight in T_1 ∆ T_2. As in the proof of the insertion property for spanning trees (Lemma 4.1.13), we can show that T_2 + e_1 − e_2 is a spanning tree. Thus, we have that w(T_2 + e_1 − e_2) < w(T_2), contradicting the minimality of T_2.
Step 2: Select one of the smallest (in terms of weight) edges in E − D. Call it e.
Step 5: Output M .
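Only a fragment of the algorithm's steps survives above, so here is a compact sketch (my own, under the usual description of Kruskal's algorithm: scan edges in non-decreasing order of weight and keep an edge iff it does not create a cycle), using a union-find structure.

def kruskal(n, weighted_edges):
    """weighted_edges: list of (w, u, v) on vertices 0..n-1; returns the MST edges."""
    parent = list(range(n))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(weighted_edges):
        ru, rv = find(u), find(v)
        if ru != rv:                  # adding (u, v) does not create a cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
print(kruskal(4, edges))              # [(0, 1, 1), (1, 2, 2), (2, 3, 4)]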
Theorem 4.3.1. The output of Kruskal's algorithm is a minimal spanning tree if G is connected.
Proof. The proof consists of two steps : firstly, we will show that the output M is a spanning tree, and then we will show that it is of minimum weight.
Step 1 : M is a spanning tree. Clearly M does not contain a cycle. If M has at least two components C_1, C_2, then let e be a minimal weight edge between C_1 and C_2; such an e exists as G is connected. But e would have been added in Step 2 when it was considered. Thus, we derive a contradiction to the fact that M is disconnected. Hence M is connected and acyclic, i.e., a spanning tree.
Step 2 : M has minimum weight. We shall show that the following claim holds by induction
: At every stage of the algorithm, there is a minimal spanning tree T such that M ⊂ T .
The above claim suffices because at the last step, the output M is the M at the end of Step 3 and if
it is a subgraph of a spanning tree T , then it must be that M = T as M is a spanning tree from Step 1.
Induction argument : The claim holds in the beginning because M = ∅. Assume that the claim
holds at some stage of the algorithm for a M and a T .
Now, suppose that the next chosen edge creates a cycle, then M does not change and the claim
still holds.
So, let the next chosen edge e not create a cycle, and hence M becomes M + e. If e ∈ T, we are done. Suppose e ∉ T. Then T + e has a cycle C. This cycle contains edges which do not belong to M, since e does not form a cycle in M but does in T. Note that there exists f ∈ C − M which has not been considered by the algorithm before e, and hence f must have weight at least as large as that of e; indeed, if f had been considered before e, then M + f would not have contained a cycle at that stage (as M + f ⊂ T), and so f would have been added to M already.
Then T − f + e is a spanning tree by the insertion property, and it has weight at most that of T. If w(f) = w(e), then T − f + e is a minimal spanning tree containing M + e and again the claim holds. If w(f) > w(e), then T − f + e is a spanning tree of strictly smaller weight than T, which is a contradiction.
Step 2: Select one of the smallest (in terms of weight) edges in E ∩ (S × T ). Call it e and M = M ∪ e.
Step 5: Output M .
Proof. Denote the output by M̃. The proof of the fact that M̃ is a spanning tree is left as an exercise, as the steps were sketched in class. We shall only show minimality here.
Claim : As with the proof of Kruskal’s algorithm, we shall show that at every step M ⊂ M ∗ with
the latter being a MST.
The claim is trivially true at the initial step, when M = ∅, since an MST exists. Suppose the claim is true up to some step with M, T, S already determined, i.e., M ⊂ M*, an MST.
Let e = (u, v) be the edge chosen by Prim's algorithm at this stage and suppose e ∉ M*. Then there exists a path P from u to v in M*. This implies that there exists an edge e′ of P between S and T. But since Prim's algorithm selected e instead of e′, we have that w(e) ≤ w(e′). Since P + e is a cycle in M* + e, we can remove any edge of the cycle and still get a spanning tree. Thus, M* + e − e′ is also a spanning tree, and since M* is an MST, we have that w(e) ≥ w(e′). Thus, we get that w(e) = w(e′) and M + e ⊂ M* + e − e′, an MST. This proves the claim and completes the proof.
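As with Kruskal's algorithm, only a fragment of the steps is shown above; the following heap-based Python sketch of Prim's algorithm is my own illustration, with an arbitrary example graph.

import heapq

def prim(n, adj):
    """adj: dict v -> list of (weight, neighbour); returns the MST edges (u, v, w)."""
    visited = {0}
    heap = [(w, 0, v) for w, v in adj[0]]
    heapq.heapify(heap)
    mst = []
    while heap and len(visited) < n:
        w, u, v = heapq.heappop(heap)      # cheapest edge leaving the visited set
        if v in visited:
            continue
        visited.add(v)
        mst.append((u, v, w))
        for w2, x in adj[v]:
            if x not in visited:
                heapq.heappush(heap, (w2, v, x))
    return mst

adj = {0: [(1, 1), (3, 2)], 1: [(1, 0), (2, 2)], 2: [(3, 0), (2, 1), (4, 3)], 3: [(4, 2)]}
print(prim(4, adj))   # [(0, 1, 1), (1, 2, 2), (2, 3, 4)]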
Exercise* 4.3.3. The multi-set of weights of a minimal spanning tree is unique, i.e., for any s ∈ R and any two MSTs M, M′, we have that |{e ∈ M : w(e) = s}| = |{e ∈ M′ : w(e) = s}|.
Exercise 4.3.4 (Minimal Spanning Forests). Define and suitably extend these notions to minimal
spanning forests.
Exercise* 4.3.5. Given a connected graph G = (V, E) with non-negative edge weights w satisfying the following conditions : w(u, v) = ∞ if (u, v) ∉ E and w(u, v) > 0 if u ∼ v. Fixing a starting vertex x, show that the following algorithm computes the shortest paths from x to all other vertices in G.
Step 3 : Continue Step 2 until S = V(G) and set d(u, x) = t(u) for all u.
Chapter 5
Extremal Graph Theory
The word extremal refers to the fact that one is concerned with extremal (maximal or minimal) questions about graphs. A prototypical question is the maximal or minimal number of edges in a graph with a certain property. We have already answered such a question about the maximal number of edges for a graph to be a tree. Turan's theorem, which we shall see shortly, is considered one of the foundational results in this subject. It was in answering a question in extremal graph theory that Erdös made powerful use of probabilistic ideas, and this gave birth to what is now famously known as the probabilistic method. We shall see an illustration of this in the second section.
Lemma. Every graph G contains a self-avoiding walk of length δ(G).
Proof. WLOG assume δ(G) ≥ 1. Let v_0 be a vertex in G. Given v_i with i < δ(G), we can choose a neighbour v_{i+1} of v_i such that v_{i+1} ∉ {v_0, . . . , v_{i−1}}, as v_i has at least δ(G) ≥ i + 1 neighbours. Hence we get a self-avoiding walk v_0 v_1 . . . v_{δ(G)}, as needed.
Define the girth of a graph as g(G) := min{l(C) : C a cycle}. Set g(G) = ∞ if there exists no cycle. The diameter of a graph is diam(G) := max{d(u, v) : u, v ∈ V}.
Lemma. If G contains a cycle, then g(G) ≤ 2 diam(G) + 1.
Proof. Assume otherwise. Let C = v_0 . . . v_k v_0 be a cycle of minimal length. Suppose k is even, i.e., l(C) = k + 1 is odd. Then d(v_0, v_{k/2}) = k/2. But by definition of the diameter, k/2 ≤ diam(G), and so l(C) = k + 1 ≤ 2 diam(G) + 1. If k is odd, then d(v_0, v_{(k+1)/2}) = (k + 1)/2 and again we have that l(C) = k + 1 ≤ 2 diam(G). In either case l(C) ≤ 2 diam(G) + 1, contradicting our assumption.
Lemma 5.1.3 (Mantel's theorem, 1907). If G is a graph on n vertices with no triangle, then |E| ≤ ⌊n²/4⌋. Equivalently, if |E| > n²/4, then g(G) = 3.
Proof. Let G have [n] as its vertex set and no triangles. Let z_i ≥ 0 be a weight on i such that \sum_i z_i = 1, and let us try to maximize S = \sum_{i∼j} z_i z_j. Let k ≁ l. Assume that \sum_{j∼k} z_j z_k = x z_k and \sum_{j∼l} z_j z_l = y z_l, and WLOG assume x ≥ y. Observe that z_k x + z_l y ≤ (z_k + ε) x + (z_l − ε) y for ε > 0. Thus if (z_1, . . . , z_n) is a configuration of weights, then (z_1, . . . , z_k + z_l, . . . , 0, . . . , z_n) is a configuration with larger (or equal) S, i.e., we have transferred the weight from z_l to z_k. If we repeat the procedure, it stops when the weights are concentrated on two adjacent vertices. This is because whenever the weights are concentrated on at least three vertices, there are two vertices which are not neighbours, as the induced subgraph is not complete (G has no triangles). If the two adjacent vertices are i, j, then S = z_i z_j ≤ 1/4.
Now let z_i = n^{−1} for all i ∈ [n]. Then the corresponding S = n^{−2} |E|, and this is at most 1/4 by the above argument. Hence the theorem is proved.
Theorem 5.1.4 (Turan, 1941). If a simple graph on n vertices has no complete subgraph K_p, then
|E| ≤ M(n, p) := \frac{(p−2)n² − r(p−1−r)}{2(p−1)}, where r ≡ n (mod p − 1).
The above bound can be achieved as follows : Let S_1, . . . , S_{p−1} be an almost equal partition of V, i.e., S_1, . . . , S_r are subsets of size t + 1 and the rest are of size t, for some t ≥ 0 and with r as above. Construct the complete multi-partite graph on S_1, . . . , S_{p−1}, i.e., all the edges between S_i and S_j are present for i ≠ j and these are the only edges. This graph does not have a complete subgraph K_p, and its number of edges is M(n, p).
and one can easily verify that the RHS is equal to M (n, p).
See [Aigner et al. 2010, Chapter 36] for more proofs of Turan’s theorem.
Exercise* 5.1.5. If you generalize the argument for Mantel’s theorem given in the class, what is the
bound you get in Turan’s theorem ?
Theorem 5.1.6. If a graph G on n vertices has more than (1/2) n \sqrt{n − 1} edges, then G has girth ≤ 4. That is, G contains a triangle or a quadrilateral.
Proof. Suppose g(G) ≥ 5. Let v_1, . . . , v_d be the neighbours of a vertex v. Since there are no triangles, v_j ∉ N_{v_i} for i ≠ j, and since there are no quadrilaterals, N_{v_i} ∩ N_{v_j} = {v} for i ≠ j. Thus, the sets N_{v_i} \ {v}, i = 1, . . . , d, together with N_v ∪ {v}, are disjoint subsets of [n], and so \sum_{i=1}^{d} (d_{v_i} − 1) + d + 1 ≤ n; hence \sum_{w∼v} d_w ≤ n − 1. Thus, we get
n(n − 1) ≥ \sum_v \sum_{w∼v} d_w = \sum_v d_v^2 ≥ n^{−1} (\sum_v d_v)^2 = n^{−1} 4|E|^2,
and hence |E| ≤ (1/2) n \sqrt{n − 1}, a contradiction.
Theorem 5.1.7. If G is a graph on n vertices with n ≥ 3 and δ(G) ≥ n/2, then G contains a Hamilton cycle.
Recall that a Hamilton circuit is a simple closed path passing through every vertex exactly once.
Proof. Suppose the theorem fails, and let G be a counterexample with the maximal number of edges, i.e., the addition of any edge to G creates a Hamilton cycle. (Such a G is not complete, since K_n has a Hamilton cycle for n ≥ 3.) Let v ≁ w; then G ∪ (v, w) contains a Hamilton cycle v = v_1 v_2 . . . v_n = w, v. Thus v_1 v_2 . . . v_n is a simple path. Define the sets S_v := {i : v ∼ v_{i+1}} and S_w := {i : w ∼ v_i}. Since δ(G) ≥ n/2, we have |S_v|, |S_w| ≥ n/2, and further S_v, S_w ⊂ {1, . . . , n − 1}. Hence S_v ∩ S_w ≠ ∅; let i_0 ∈ S_v ∩ S_w. Then v = v_1 v_2 . . . v_{i_0} w = v_n v_{n−1} . . . v_{i_0+1} v_1 = v is a Hamiltonian circuit in G, contradicting our assumption.
Exercise* 5.1.8. 1. If δ(G) ≥ 2, every graph has a cycle of length at least δ(G) + 1.
2. Prove that a graph G with g(G) = 4 and δ(G) ≥ k has at least 2k vertices. Is 2k the best lower
bound ?
3. Let G be a graph with g(G) = 5 and δ(G) ≥ k. Show that G has at least k² + 1 vertices.
5. Show that Petersen graph (defined in Section 3.1) is the largest 3-regular graph with diameter 2.
6. Let G be a simple graph on n vertices (n > 3) with no vertex of degree n − 1. Suppose that
for any two vertices of G, there is a unique vertex joined to both of them. If x and y are not
adjacent show that d(x) = d(y). Now, show that G is a regular graph.
7. Show that if a simple graph on n vertices has ⌊n²/4⌋ edges and no triangles, then it is the complete bipartite graph K_{k,k} if n = 2k or K_{k,k+1} if n = 2k + 1.
8. If a simple graph on n vertices has e edges, then it has at least (e/(3n))(4e − n²) triangles.
9. Suppose G is a graph with at least one edge. Then there exists a subgraph H such that δ(H) > ε(H) ≥ ε(G), where recall that ε(G) was defined as the edge density of the graph (see Definition 3.2.8).
Note that R(m, n) = R(n, m) and R(m, 2) = m. Our previous lemma gives that R(3, 3) ≤ 6. Ramsey (1929) showed that R(m, n) < ∞ for all m, n. Using the probabilistic method, Erdös showed that R(k, k) > ⌊2^{k/2}⌋. We will present the proof now.
Let n = ⌊2^{k/2}⌋; we wish to show that there is a coloring of the edges of K_n with blue and red such that there is no monochromatic clique of size k. The set of all possible colorings of K_n is Ω_n := {R, B}^{E(K_n)}, i.e., a set of cardinality 2^{\binom{n}{2}}. The crucial idea in the probabilistic method is to pick a random variable X with values in Ω_n such that the probability that X is a coloring with no monochromatic k-clique is positive; this trivially implies that the set of colorings with no monochromatic k-clique is non-empty, as desired. As is common, we view f ∈ Ω_n as a function f : E(K_n) → {R, B}.
Let A_n := {f ∈ Ω_n : f has no monochromatic k-clique}. Instead of showing P(X ∈ A_n) > 0, we will show that P(X ∈ A_n^c) < 1. We shall shortly see why the required upper bound is easier. Firstly, note that if f ∉ A_n, then there exists a subset S ⊂ [n] with |S| = k such that f_{E_S} ≡ R or f_{E_S} ≡ B, where we have used E_S to abbreviate the edge set of <S>, the complete subgraph on S. Hence, we have that

A_n^c ⊂ \bigcup_{S⊂[n], |S|=k} ({f : f_{E_S} ≡ R} ∪ {f : f_{E_S} ≡ B}).
The union bound is what makes it easier to upper bound than lower bound. Also, so far we have
not used any specific property of the random variable at all. We can now make a ‘clever’ choice of
the random variable to obtain the desired upper bound. This flexibility in making ‘clever’ choices of
random variables to obtain desired bounds is another hallmark of the probabilistic method and lends
enormous power to the method.
Now choose X ∈ Ω_n as follows : X(e), e ∈ E(K_n), are i.i.d. random variables such that P(X(e) = R) = 1/2 = P(X(e) = B). Equivalently, P(X = f) = 2^{−\binom{n}{2}} for all f ∈ Ω_n, i.e., X is uniformly distributed in Ω_n. Now trivially, we have that

P(X_{E_S} ≡ R) = P(X_{E_S} ≡ B) = 2^{−\binom{k}{2}}
and thus

P(X ∉ A_n) ≤ \binom{n}{k} 2^{1−\binom{k}{2}}.

I will leave it as an exercise to verify that, for the choice of n we have made, \binom{n}{k} 2^{1−\binom{k}{2}} < 1, as desired. This proves R(k, k) > n := ⌊2^{k/2}⌋.
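As a quick sanity check (my own, not part of the notes), the inequality \binom{n}{k} 2^{1−\binom{k}{2}} < 1 for n = ⌊2^{k/2}⌋ can be verified numerically for small k:

from math import comb, floor

# Check that binom(n, k) * 2**(1 - binom(k, 2)) < 1 for n = floor(2**(k/2)),
# as used in the Erdös lower bound R(k, k) > n.
for k in range(4, 16):
    n = floor(2 ** (k / 2))
    bound = comb(n, k) * 2 ** (1 - comb(k, 2))
    print(k, n, bound < 1)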
The defect of the probabilistic method, as is obvious in the above proof, is that it is non-constructive, i.e., it does not explicitly give a coloring of K_n that has no monochromatic k-clique.
Read [Alon and Spencer 2004, Chapter 1] and [Aigner et al. 2010, Chapter 40] for more illustrative
examples on the probabilistic method and if interested, read more of [Alon and Spencer 2004].
Regardless of your liking of the proof via probabilistic method, Ramsey numbers raise many in-
teresting questions themselves. For example, R(5, 5) is still unknown. Here is a legendary statement
from Erdös attesting to the complexity of Ramsey numbers (see [Spencer 1994, Page 4]) :
” Erdös asks us to imagine an alien force, vastly more powerful than us, landing on Earth and
demanding the value of R(5, 5) or they will destroy our planet. In that case, he claims, we should
marshal all our computers and all our mathematicians and attempt to find the value. But suppose,
instead, that they ask for R(6, 6). In that case, he believes, we should attempt to destroy the aliens. ”
Alternatively, one can consider a matching M of a graph G as a subgraph of G such that d_M(v) = 1 for all v ∈ V(M). A matching is complete if M is spanning. A vertex v is said to be saturated if v ∈ V(M) and unsaturated otherwise. For a subset S ⊂ V, N(S) = ∪_{v∈S} N(v).
Theorem 6.1.2 (Hall’s marriage theorem ; Hall, 1935). Let G be a bi-partite graph with the two
vertex sets being V1 , V2 . Then there exists a complete matching on V1 iff |N (S)| ≥ |S| for all S ⊂ V1 .
We will give a proof via induction now and a later exercise will involve proof using max-flow
min-cut theorem. See [Diestel 2000, Section 2.1] for two more proofs.
Proof. Let |V_1| = k; our proof will be by induction on k. If k = 1, the proof is trivial.
Let G = V_1 ∪ V_2 be such that the result holds for any bi-partite graph with strictly smaller V_1.
Suppose first that |N(S)| ≥ |S| + 1 for all non-empty S ⊊ V_1. Then choose (v, w) ∈ E ∩ (V_1 × V_2) and consider the induced subgraph G′ := <V − {v, w}>. Since we have removed only w from V_2 and |N(S)| ≥ |S| + 1 for all non-empty S ⊊ V_1, we get that |N(S′)| ≥ |S′| for all S′ ⊂ V_1 − {v}. Thus there is a complete matching M on V_1 − {v} in G′ by the induction hypothesis, and M ∪ (v, w) is a complete matching on V_1 in G, as desired.
If the above is not true, i.e., there exists a non-empty A ⊊ V_1 such that N(A) = B and |A| = |B|, then, by the induction hypothesis, there is a complete matching M_0 on A in the induced subgraph <A ∪ B>; Hall's condition trivially holds there, i.e., for all S ⊂ A, |N(S) ∩ B| = |N(S)| ≥ |S|. Let G′ := G − <A ∪ B>. Let S ⊂ V_1 − A and suppose that |N′(S)| < |S|, where N′(S) = N(S) ∩ (V_2 − B). Then, we have that
N(S ∪ A) = N′(S) ∪ B and hence |N(S ∪ A)| ≤ |N′(S)| + |B| < |S| + |A|, a contradiction. Hence, G′ also satisfies Hall's condition, and again by the induction hypothesis G′ has a complete matching M′ on V_1 − A. Thus, we have a complete matching M := M_0 ∪ M′ on V_1 in G.
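As an algorithmic companion (my own sketch, not from the notes; it uses the augmenting-path idea that reappears in Berge's theorem below), the following finds a maximum matching in a bipartite graph, and hence a complete matching on V_1 whenever Hall's condition holds. The example graph is arbitrary.

def max_bipartite_matching(adj):
    """adj: dict mapping each vertex of V1 to a list of neighbours in V2."""
    match_of = {}                      # vertex of V2 -> matched vertex of V1

    def try_augment(u, seen):
        for w in adj[u]:
            if w in seen:
                continue
            seen.add(w)
            # w is free, or its current partner can be re-matched elsewhere
            if w not in match_of or try_augment(match_of[w], seen):
                match_of[w] = u
                return True
        return False

    for u in adj:
        try_augment(u, set())
    return {v: w for w, v in match_of.items()}   # V1 vertex -> matched V2 vertex

adj = {"a": [1, 2], "b": [1], "c": [2, 3]}
print(max_bipartite_matching(adj))   # {'b': 1, 'a': 2, 'c': 3} -- every V1 vertex saturated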
Exercise 6.1.3. For k > 0, a k-regular bipartite graph has a perfect matching.
Proposition 6.1.4. Let d ≥ 1. Let G be a bipartite graph on V_1 ⊔ V_2 such that |N(S)| ≥ |S| − d for all S ⊂ V_1. Then G has a matching with at least |V_1| − d independent edges.
Proof. Set V_2′ := V_2 ∪ [d]. Define G′ with vertex set V_1 ⊔ V_2′ and edge set E(G) ∪ (V_1 × [d]). Then it is easy to see that Hall's condition holds on G′, and hence there is a complete matching M of V_1 in G′. Now, if we remove the edges in M incident on [d], we get a matching with at least |V_1| − d edges, as required.
Exercise* 6.1.5. Let G be a bi-partite graph with partitions V_1 = {x_1, . . . , x_m} and V_2 = {y_1, . . . , y_n}. Then G has a subgraph H such that d_H(x_i) = d_i and d_H(y_j) ≤ 1 for all 1 ≤ i ≤ m and 1 ≤ j ≤ n iff for all S ⊂ V_1, we have that |N(S)| ≥ \sum_{x_i ∈ S} d_i.
Definition 6.1.6 (Independent sets and covers). An independent set of vertices is S ⊂ V such that
no two vertices in S are adjacent. A subset of vertices S ⊂ V is a vertex cover if every edge in G is
incident to at least one vertex in S. An edge cover is a set of edges E 0 ⊂ E such that every vertex is
contained in at least one edge in E 0 .
For the relevance of independent sets to information theory/communication, see Section 6.4.
Exercise 6.1.8. Prove that a graph G is bipartite if and only if every subgraph H of G has an
independent set consisting of at least half of V(H).
We first derive some trivial relations between the four quantities. If M is a maximal matching, then to cover each edge of M we need distinct vertices, and hence any vertex cover has size at least |M|. This yields the first inequality below.
As for the second inequality, observe that to cover the vertices of an independent set, we need distinct edges.
Hall’s marriage theorem, Koenig’s, Gallai’s and Berge’s theorems 37
Lemma 6.1.9. Let G be a graph. S ⊂ V is an independent set iff S c is a vertex cover. As a corollary,
we get α(G) + β(G) = n = |V |.
Theorem 6.1.10 (Konig, Egervary, 1931). For a bi-partite graph, α′(G) = β(G).
Remark 6.1.11 (Equivalent Formulation). A bi-partite graph is equivalent to a 0–1 matrix : index the rows by V_1 and the columns by V_2, and put a 1 in the (v_1, v_2) entry iff there is an edge v_1 ∼ v_2. Conversely, it is easy to see that any bi-partite graph can be represented as such a 0–1 matrix with rows indexed by V_1 and columns indexed by V_2.
By a line, we shall mean either a row or a column. Under the matrix formulation, a vertex cover is a set of lines that includes all the 1's. An independent set of edges is a collection of 1's such that no two 1's are on the same line.
Proof. We will show that for a minimum vertex cover Q, there exists a matching of size at least |Q|. Partition Q into A := Q ∩ V_1 and B := Q ∩ V_2. Let H and H′ be the induced subgraphs on A ⊔ (V_2 − B) and (V_1 − A) ⊔ B respectively. If we show that there is a complete matching on A in H and a complete matching on B in H′, then we have a matching of size at least |A| + |B| (= |Q|) in G. Also, note that it suffices to show that there is a complete matching on A in H, because we can reverse the roles of A and B and apply the same argument to B as well.
Since A ∪ B is a vertex cover, there cannot be an edge between V_1 − A and V_2 − B. Suppose for some S ⊂ A, we have that |N_H(S)| < |S|. Since N_H(S) covers all edges from S that are not incident on B, Q′ := Q − S + N_H(S) is also a vertex cover. By the choice of S, Q′ is a smaller vertex cover than Q, contradicting the minimality of Q. Hence, Hall's condition holds true for A in H, and by the arguments in the previous paragraph, the proof is complete.
Exercise 6.1.12. If a graph G does not contain a path of length more than 2, show that its connected components are all star graphs.
Theorem 6.1.13 (Gallai, 1959). If G is a graph without isolated vertices, then α′(G) + β′(G) = n = |V|.
Let Q be a minimal edge cover. Then Q cannot contain a path of length more than 2. Else, by
removing the middle edge in a path of length at least 3, we can obtain a smaller edge cover. By the
α′(G) + β′(G) ≥ |M| + |Q| = k + \sum_{i=1}^{k} |E(C_i)| = \sum_{i=1}^{k} |V(C_i)| = n.
As a corollary, we get König's result : if G is a bi-partite graph without isolated vertices, then α(G) = β′(G).
Definition 6.1.14 (Augmenting path). Given a matching M, an M-alternating path P is a path whose edges alternate between M and M^c. An M-augmenting path is an M-alternating path whose end-vertices do not belong to M.
Theorem 6.1.15 (Berge, 1957 ). A matching M in a graph is a maximum matching in G iff G has
no M -augmenting path.
Theorem 6.2.2 (Petersen, 1891). Every regular graph of positive even degree has a 2-factor.
Proof. Let G be any 2k-regular graph for k ≥ 1. WLOG, let G be connected. Let v_1 v_2 . . . v_m v_1 be an Eulerian circuit in G. Now, we replace each vertex v by two vertices v^−, v^+ and replace each edge v_i v_{i+1} of the circuit by v_i^− v_{i+1}^+. Thus we get a new k-regular bipartite graph G′. By Exercise 6.1.3, there is a 1-factor in G′. Now, by merging the vertices v^−, v^+ in G′ back into v, we get a 2-factor in G.
Theorem 6.2.3 (Tutte , 1947 ). A graph has a 1-factor iff o(G − S) ≤ |S| for all S ⊂ V .
Proof. (Lovasz, 1975) If G has a 1-factor, then each odd component of G − S has at least one vertex that must be matched to a vertex in S, and these vertices in S have to be distinct. Thus, o(G − S) ≤ |S| for all S ⊂ V if G has a 1-factor.
Now let G be a graph without a 1-factor. We shall show that there is a bad set, i.e., a set S violating Tutte's condition o(G − S) ≤ |S|. If G′ = G + e has no 1-factor, then neither does G. Further, if G′ has a bad set S, then so does G. Hence, we may assume that G is an edge-maximal graph without a 1-factor, and we shall find a bad set in G.
Heuristic : If S is a bad set for G, then all components of G − S have to be complete. If a component, say C, is not complete, we can consider G ∪ e where e is a missing edge in C. Since o(G ∪ e − S) = o(G − S) > |S|, by the forward implication of the theorem we have that G ∪ e has no 1-factor, violating the edge-maximality of G. Further, by the same reasoning, every s ∈ S must be connected to all vertices of G − S. We will now show that these two conditions essentially characterize bad sets and then produce such a set.
Let us say that a set S ⊂ V satisfies condition B if all components of G − S are complete and every s ∈ S is connected to all vertices of G − S.
Claim : If S satisfies condition B, then either S is bad or ∅ is bad.
If S is not bad, then we can join one vertex from each of the odd components to S disjointly, and then try to pair up the remaining vertices. In every even component of G − S we can pair up the vertices, and in every odd component too we can pair up the vertices not paired to S. We are only left with the remaining vertices S′ in S, where |S′| = |S| − o(G − S) and S′ induces a complete subgraph. But since G does not contain a 1-factor, |S| − o(G − S) must be odd. Since there is a complete matching on V − S′, |V − S′| is even and so |V| is odd, i.e., ∅ is a bad set.
Now, to show that G has a set S satisfying condition B, let S be the set of vertices that are adjacent to every other vertex. If S does not satisfy condition B, then some component of G − S is not complete. Let v ≁ w be in such a component and let v, v_1, v_2 be the first three vertices on a shortest path from v to w. Then v ∼ v_1, v_1 ∼ v_2 but v ≁ v_2. Since v_1 ∉ S, there is a u such that u ≁ v_1. By the edge-maximality of G, there is a 1-factor H_1 in G + (v v_2) and a 1-factor H_2 in G + (v_1 u).
Let P = u . . . u′ be a maximal path in G starting at u with an edge from H_1 and alternating between edges in H_1 and H_2. If the last edge of P is in H_1, then by maximality of P there is no edge of H_2 incident on u′ in G. This implies that u′ = v_1, as every other vertex has an edge of H_2 incident on it in G. We then set C = P + (v_1, u). Note that C is a cycle and it is of even length, as P starts and ends with edges of H_1. If the last edge of P is in H_2, again by maximality of P there is no edge of H_1 incident on u′ in G, and as earlier this means u′ ∈ {v, v_2}. Then consider C = P + (u′, v_1) + (v_1, u). Again, C is an even cycle, as P starts with an edge of H_1 and ends with an edge of H_2. In either case, C is an even cycle and (v_1, u) is its only edge not in E(G). But then consider H′ = H_2 − (C ∩ H_2) + (C − H_2), i.e., on C replace the edges of H_2 by those of C − H_2. Now, H′ ⊂ G. Since H_2 is a 1-factor, so is H′, and thus we get a contradiction. Thus S satisfies condition B, and by the Claim we obtain a bad set, as required.
Corollary 6.2.4 (Defective version). A graph G contains a subgraph H with a 1-factor with |V (H)| ≥
|V (G)| − d iff o(G − S) ≤ |S| + d for all S ⊂ V .
Proof. Let G contain a subgraph H with a 1-factor with |V(H)| = |V(G)| − k for some k ≤ d. Consider G′ := G ∨ K_k, the join of G with K_k. Then G′ has a 1-factor, obtained by taking H along with a matching between V(G) − V(H) and K_k. Let S ⊂ V. Then o(G′ − (S ⊔ [k])) ≤ |S| + k, as G′ has a 1-factor. Further, G′ − (S ⊔ [k]) = G − S and so o(G − S) ≤ |S| + k ≤ |S| + d.
Conversely, let o(G − S) ≤ |S| + d for all S ⊂ V, where d is the minimal such number, i.e., d = max{o(G − S) − |S| : S ⊂ V}. We will assume d ≥ 1; else d = 0 and Tutte's 1-factor theorem applies. Suppose d = o(G − S_0) − |S_0| for some S_0 ⊂ V. Then |V| − |S_0| has the same parity as o(G − S_0), since the even components of G − S_0 contribute an even number of vertices. Hence |V| ≡ o(G − S_0) + |S_0| = d + 2|S_0| ≡ d (mod 2), i.e., |V| and d are both even or both odd.
Let G′ := G ∨ K_d; by the above argument |V′| = |V| + d is even. We will show that there exists a 1-factor in G′, and this will imply that G contains a subgraph H with a 1-factor such that |V(H)| ≥ |V(G)| − d. To show the existence of a 1-factor in G′, we verify Tutte's condition.
Let S′ ⊂ V ⊔ [d]. Suppose that [d] \ S′ ≠ ∅. Then G′ − S′ is connected and hence o(G′ − S′) ≤ 1 ≤ |S′| unless S′ = ∅; for S′ = ∅, o(G′ − S′) = 0 as |V′| is even. Now suppose [d] ⊂ S′ and let S′ = [d] ⊔ S for some S ⊂ V. Then G′ − S′ = G − S and so o(G′ − S′) = o(G − S) ≤ |S| + d = |S′|. Thus G′ satisfies Tutte's condition.
Tutte [1952] showed a necessary and sufficient condition for a graph G to have an f -factor. The
proof was by reducing it to a problem of checking for a 1-factor in a certain simple graph.
A graph construction : Given a graph G with degree function d and a function f with f ≤ d, we define a graph H as follows. Let e := d − f. To construct H, replace each vertex v with a complete bi-partite graph K_v := K_{d(v), e(v)} with vertex set A(v) ⊔ B(v). For each edge (v, w) ∈ G, add an edge between a vertex of A(v) and a vertex of A(w), using a different vertex of A(v) for each edge incident on v.
Theorem 6.2.6. G has an f -factor iff f ≤ d and the graph H constructed as above has a 1-factor.
Proof. If G has an f-factor, then at each v exactly e(v) vertices in A(v) are unmatched. These can be matched arbitrarily with the vertices of B(v), giving us a 1-factor of H.
Conversely, suppose H has a 1-factor. Remove B(v) and the vertices in A(v) matched to B(v). Now, at each v, A(v) has f(v) vertices remaining. If we merge these f(v) vertices and call the merged vertex v, we get an f-factor of G.
Observe that we did not use the fact that G is a simple graph. Now Tutte's 1-factor condition can be translated into an f-factor condition; see [West 2001, Exercise 3.3.29]. An important application is the Erdös–Gallai characterization of degree sequences of graphs (see Section 6.5).
Exercise* 6.2.7. 1. A tree T has a perfect matching iff o(T − v) = 1 for all v ∈ T .
2. Show that 2α′(G) = min_{S⊂V} {|V(G)| − d(S)}, where d(S) = o(G − S) − |S|.
3. For any k, show that there are k-regular graphs with no perfect matching.
4. If G is k-regular, has even number of vertices and remains connected when any (k − 2) edges are
deleted, then G has a 1-factor.
6. Let Q_n be the hypercube graph on {0, 1}^n. What are α(Q_n), α′(Q_n), β(Q_n), β′(Q_n) ?
7. Every 3-regular simple graph with no cut-edge decomposes (i.e., edge-disjoint union) into copies
of P4 (the 4-vertex path).
8. Let G be a k-regular bipartite graph. Prove that G can be decomposed into r-factors iff r divides
k.
10. Let T be a tree on n vertices such that α(T) = k. Can you determine α′(T) in terms of n, k ?
11. Characterize the graphs G for which the following statements hold. Justify your answers.
(1) (max. independent set) α(G) = 1.
(2) (max. size of matching) α′(G) = 1.
(3) (min. vertex cover) β(G) = 1.
(4) (min. edge cover) β′(G) = 1.
NOTE : In each of the above, you are required to prove a statement of the form . . . (G) = 1 iff
G is . . ..
Step 1 : Each "unengaged" man "proposes" to the highest-ranked woman who has not yet rejected him.
Step 2 : Each woman agrees to get "engaged" to the "highest-ranked proposer" in her list. If she is previously engaged and the current proposer is ranked higher, she rejects the previous engagement. The other proposers are also rejected.
See [West 2001, Theorem 3.2.18] for the formal theorem statement and proof of the algorithm.
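Here is a minimal Python sketch of the proposal algorithm described above (my own illustration, with hypothetical preference lists): men propose in order of preference and each woman holds on to the best proposal received so far.

def gale_shapley(men_prefs, women_prefs):
    rank = {w: {m: r for r, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}      # index of the next woman to propose to
    engaged_to = {}                              # woman -> man
    free_men = list(men_prefs)
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:  # w prefers the new proposer
            free_men.append(engaged_to[w])
            engaged_to[w] = m
        else:
            free_men.append(m)                   # w rejects m
    return engaged_to

men = {"A": ["x", "y"], "B": ["y", "x"]}
women = {"x": ["B", "A"], "y": ["A", "B"]}
print(gale_shapley(men, women))   # {'y': 'B', 'x': 'A'} -- a stable matching (woman -> man)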
Theorem 6.5.1 (Erdös–Gallai Theorem, 1960). A list of non-negative integers (d_1, . . . , d_n) in non-increasing order is graphic iff \sum_i d_i is even and, for all 1 ≤ k ≤ n, we have that

\sum_{i=1}^{k} d_i ≤ k(k − 1) + \sum_{i=k+1}^{n} min{d_i, k}.
We present the recent proof due to [Tripathi 2010]. Another proof, using Tutte's f-factor theorem (Theorem 6.2.6), can be found in [West 2001, Exercise 3.3.29]. There is also a simple proof in [Choudum 1986].
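A small computational aside (not from the notes): the Erdös–Gallai conditions can be checked directly for a given list of non-negative integers, as in the following sketch.

def is_graphic(d):
    d = sorted(d, reverse=True)
    n = len(d)
    if sum(d) % 2 != 0:
        return False
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(di, k) for di in d[k:])
        if lhs > rhs:
            return False
    return True

print(is_graphic([3, 3, 3, 3]))     # True  (realized by K_4)
print(is_graphic([3, 3, 1, 1]))     # False (the sum is even but the k = 2 condition fails)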
A graph G on vertices v_1, . . . , v_n is said to be a subrealization of a nonincreasing list (d_1, . . . , d_n) if d(v_i) ≤ d_i for all 1 ≤ i ≤ n. For a subrealization, we say r is the critical index if it is the largest index such that d(v_i) = d_i for 1 ≤ i < r. The sufficiency part (the non-trivial one) of the proof proceeds as follows : we start with the subrealization on n vertices and no edges. Assuming d_1 ≠ 0, we set r = 1. While r ≤ n, we do not change the degrees of v_1, . . . , v_{r−1} but try to increase the degree of vertex v_r and reduce the deficiency d_r − d(v_r).
Proof. The necessity is argued as follows : evenness is trivial. If we look at the sum of the degrees of the k vertices of largest degree, then the edges counted are either among these k vertices or go outside. The edges among the k vertices are counted twice and hence contribute at most twice the number of edges of K_k, i.e., k(k − 1). The edges from {1, . . . , k} to a vertex i with i > k number at most min{d_i, k}.
Now, we prove the sufficiency as described above : Given critical index r, define S = {vr+1 , . . . , vn }.
Our construction shall give that S is an independent set at every step. This is true for r = 1.
• Case 0 : v_r ≁ v_i for some v_i such that d(v_i) < d_i. Add the edge (v_i, v_r).
• Case 1 : v_r ≁ v_i for some i < r. Since d(v_i) = d_i ≥ d_r > d(v_r), there exists u ∼ v_i, u ≁ v_r, u ≠ v_r. If d_r − d(v_r) ≥ 2, replace (u, v_i) with (u, v_r), (v_i, v_r). If d_r − d(v_r) = 1, then since $\sum_i (d_i - d(v_i))$ is even, there is an index k > r such that d(v_k) < d_k. Add the edge (v_r, v_k) if the edge is not present, and if not, replace (u, v_i), (v_r, v_k) with (u, v_r), (v_i, v_r).
• Case 2 : v_1, . . . , v_{r−1} ∈ N(v_r) and d(v_k) ≠ min{r, d_k} for some k > r. Since d(v_k) ≤ d_k and S is independent, we have d(v_k) ≤ r. Thus, d(v_k) < min{r, d_k}. If v_r ≁ v_k, we can use Case 0. If not, since d(v_k) < r, there exists i < r such that v_k ≁ v_i. Since d(v_i) = d_i ≥ d_r > d(v_r), there exists u ∼ v_i, u ≁ v_r, u ≠ v_r. Remove (u, v_i) and add (u, v_r), (v_i, v_k).
• Case 3 : v_1, . . . , v_{r−1} ∈ N(v_r) and v_i ≁ v_j for some i < j < r. Since d(v_i) ≥ d(v_j) > d(v_r), there exist u ∼ v_i, u ≁ v_r, u ≠ v_r and w ∼ v_j, w ≁ v_r, w ≠ v_r. It is possible that u = w. If u, w ∈ S, then delete (u, v_i), (w, v_j) and replace with (v_i, v_j), (u, v_r). If u ∉ S or w ∉ S, apply the arguments as in Case 1.
Suppose that none of the above cases apply. Since Case 1 doesn't apply, v_1, . . . , v_{r−1} ∈ N(v_r), and since Case 3 also doesn't apply, v_1, . . . , v_r are all pairwise adjacent. Case 2 also doesn't apply and hence d(v_k) = min{r, d_k} for all k > r. Since S is independent, we have that $\sum_{i=1}^{r} d(v_i) = r(r-1) + \sum_{i=r+1}^{n} \min\{r, d_i\}$ and hence $\sum_{i=1}^{r} (d_i - d(v_i)) \le 0$ by the Erdős–Gallai condition. Thus, we get that d(v_i) = d_i for all 1 ≤ i ≤ r and we have eliminated the deficiency at r. We can now increase r by 1 and continue with the procedure as above.
Definition 7.1.2. Let c : E → [0, ∞] be the capacity function. A flow from s to t is said to satisfy the capacity constraints if f(e) ≤ c(e) for all e, and we call such a flow feasible.
One can suitably define in-flow and out-flow at a vertex and show that Kirchhoff's node law implies that the in-flow and out-flow are equal at all vertices except the source and the sink. We can further define the value of a flow v(f) := d*f(s), and it can be shown using Kirchhoff's node law and anti-symmetry that v(f) = −d*f(t).
Let S ⊂ V. We call a pair (S, S^c) an (s, t)-cut if s ∈ S, t ∉ S. Defining $C(X, Y) = \sum_{x \in X, y \in Y} c(x, y)$ and $f(X, Y) = \sum_{x \in X, y \in Y} f(x, y)$, we can show that v(f) = f(S, S^c) ≤ C(S, S^c) for any (s, t)-cut (S, S^c) and any feasible s − t flow. Thus, we have that
$$\sup\{v(f) : f \text{ is an } (s,t)\text{-flow}\} \le \inf_{S : s \in S,\, t \in S^c} C(S, S^c).$$
It will soon become clear that the Max-flow min-cut theorem is not a single theorem but a class of theorems that can be proven under different frameworks using variants of the ideas we will use to prove the above theorem, Theorem 7.1.3.
Lemma 7.1.4. If there exists an infinite capacity s − t path, then the max-flow is infinite and so is the min-cut. Else, the min-cut is finite and so is the max-flow.
Proof. The proof of the first part follows by constructing a sequence of flows with increasing strengths. We will prove the second part alone. Let there be no infinite capacity s − t path. We construct a finite min-cut as follows : Choose an s − t path P_1; since it is not of infinite capacity, there exists e_1 ∈ P_1 such that c(e_1) < ∞. Now repeat this procedure on G − e_1 and choose an edge e_2 in an s − t path P_2 such that c(e_2) < ∞. Repeatedly, we can choose edges e_1, . . . , e_k until G − {e_1, . . . , e_k} has no s − t path. Let S be the set of vertices in the component of s in G − {e_1, . . . , e_k}; clearly t ∈ S^c. Since any edge from S to S^c must have been among the deleted edges, we have that E ∩ (S × S^c) ⊂ {e_1, . . . , e_k} and hence $C(S, S^c) \le \sum_{i=1}^{k} c(e_i) < \infty$.
$$(d^* f')(v) = \sum_{e : e^- = v} f'(e) = \sum_{e : e^- = v,\ e \neq e_{i-1}, e_i} f(e) + f'(e_i) + f'(-e_{i-1}) = \sum_{e : e^- = v,\ e \neq e_{i-1}, e_i} f(e) + f(e_i) + f(-e_{i-1}) = (d^* f)(v) = 0.$$
We first present the Ford–Fulkerson algorithm, which gives the idea of the proof.
Remark 7.1.7 (Ford–Fulkerson Algorithm). Given a flow f, define the residual capacity c_f(u, v) = c(u, v) − f(u, v).
Step 1 : Set f ≡ 0.
Step 2 : If there is a path P (called an f-augmenting path) from s to t in G such that c_f(u, v) > 0 for all edges (u, v) ∈ P, then go to Step 3, else go to Step 6.
Step 4 : For each edge (u, v) ∈ P, set f(u, v) = f(u, v) + c_f(P) and f(v, u) = f(v, u) − c_f(P).
Defining S_f to be the set of vertices that can be reached from s with a path P such that c_f(u, v) > 0 for all edges (u, v) ∈ P, note that Step 2 can also be rephrased as follows : If t ∈ S_f go to Step 3, else go to Step 6.
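The following Python sketch (my own; the nested-dict capacity format is an assumption for illustration) implements the augmenting-path scheme of Remark 7.1.7, choosing augmenting paths by breadth-first search (the Edmonds–Karp variant).

from collections import deque

# A sketch of the augmenting-path method above (BFS choice of augmenting paths).
# cap is a dict of dicts: cap[u][v] = capacity of the edge (u, v).
def max_flow(cap, s, t):
    # residual capacities, adding reverse edges of capacity 0 where needed
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u in cap:
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    value = 0
    while True:
        # Step 2: look for an f-augmenting path of positive residual capacity
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in res.get(u, {}):
                if v not in parent and res[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return value             # no augmenting path: the current flow is maximum
        # recover the path and its bottleneck residual capacity c_f(P)
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        cf = min(res[u][v] for u, v in path)
        # Step 4: augment along P, updating forward and reverse residual capacities
        for u, v in path:
            res[u][v] -= cf
            res[v][u] += cf
        value += cf

# Example: max_flow({'s': {'a': 3, 'b': 2}, 'a': {'t': 2}, 'b': {'t': 3}}, 's', 't') returns 4.

When the loop terminates, the set S_f of vertices still reachable in the residual graph gives a cut whose capacity equals the returned value, which mirrors how the proof of Theorem 7.1.3 proceeds.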
Proof. (Theorem 7.1.3) Suppose that the capacity function is bounded. We will prove the theorem
under this assumption and then argue the other case.
Let f be the max-flow whose existence is guaranteed by Lemma 7.1.5. Define
Exercise 7.1.8. The last part of the proof shows that Lemma 7.1.5 holds under the assumption of
finite min-cut alone i.e., the assumption of bounded capacity can be relaxed considerably. Give a direct
proof of this without using Max-flow Min-cut theorem.
Theorem 7.1.9. Assume that the min-cut is finite. The F-F algorithm terminates if the capacities are integral; moreover, if the capacities are integral, there is an integral maximal flow.
Proof. If capacities are integral, the min-cut is integral and so is the max-flow. At every step the F-F algorithm increases the flow strength by at least one, as the residual capacity along any augmenting path is a positive integer. Since the max-flow is finite, in finitely many steps the algorithm outputs the max-flow.
The algorithm starts with f ≡ 0 and at every step, the flow on any edge is increased/decreased by the residual capacity if the edge is in an f-augmenting path. Since the residual capacity is integral, the flow is always integral and so is the max-flow.
Example 7.1.10. The F-F algorithm need not terminate when capacities are non-integral and here is an example. See https://en.wikipedia.org/wiki/Ford_Fulkerson_algorithm#Non-terminating_example
The definition of a flow can be extended to multiple sources and targets. Let S ⊂ V, T ⊂ V be the sources and sinks respectively. The definition of flow can be modified by requiring Kirchhoff's node law to hold for all x ∉ S ∪ T and positive output at S, i.e., $\sum_{s \in S} (d^* f)(s) \ge 0$. An S − T cut is A ⊂ V such that S ⊂ A, T ⊂ A^c. We again have a max-flow min-cut theorem as follows.
Theorem 7.1.11 (Max-flow min-cut theorem for multiple sources and sinks.).
Proof. Let us again assume that the min-cut is finite, i.e., there is no infinite capacity S − T path. The trivial inequality follows as in the original theorem, and also the fact that the supremum is a maximum. To argue the equality, instead of repeating the proof, we shall use a reduction. Define G′ as follows : V′ = V ∪ {a, b}, E′ = E ∪ ({a} × S) ∪ (T × {b}). Set the capacity of the new edges to be infinite. Note that any S − T cut in G is an a − b cut in G′. Further, any a − b cut involving edges in E′ \ E has infinite capacity. Hence the min a − b cut in G′ and the min S − T cut in G are equal.
If f′ is an a − b flow in G′, let f be the restriction of f′ to G. Clearly f is skew-symmetric and satisfies Kirchhoff's node law. We verify the last condition as follows : By Kirchhoff's node law in G′, we have that
$$0 = \sum_{s \in S} (d^* f')(s) = \sum_{e : e^- \in S,\, e^+ \in V} f(e) + \sum_{e : e^- \in S,\, e^+ = a} f'(e) = \sum_{s \in S} (d^* f)(s) - (d^* f')(a).$$
A more general version of the max-flow min-cut theorem is as follows : Let G = (V, E) be a directed graph, s, t ∈ V and c : E → [0, ∞) be a capacity constraint function. Then f : E → [0, ∞) is an s − t flow satisfying capacity constraint c if
1. $\sum_{e : e^- = x} f(e) = \sum_{e : e^+ = x} f(e)$ for all x ≠ s, t. (conservation of flow / Kirchhoff's node law).
Exercise* 7.1.12. (Max-flow min-cut theorem for directed graphs.) Under the notation as above, we
have that
Exercise* 7.1.13. Use max-flow min-cut theorem for directed graphs to show the undirected version.
Exercise 7.1.14. What is the equivalent of the Ford–Fulkerson algorithm for flows on directed graphs ?
Exercise* 7.1.15. Does the F-F algorithm terminate if the capacities are rational ?
Exercise 7.2.1. Compute λ(G), κ(G) for well-known graphs such as the complete graph, path graph, cycle graph, trees, Cayley graphs, the Petersen graph et al.
Proof. Removing the edges incident to a vertex of minimum degree proves the first inequality. By definition κ(G) ≤ n − 1 where n = |V|. Let [S, S^c] := E ∩ (S × S^c) be the smallest edge-cut, i.e., λ(G) = |[S, S^c]| (see Exercise 7.2.4 as to why the smallest edge-cut will be of this form). Assume that S, S^c ≠ ∅, i.e., G is connected. We will show that κ(G) ≤ |[S, S^c]|.
and the LHS has cardinality at least |T |. In other words, we have selected all the edges from x to S c
and for all other z ∈ S − {x}, we have selected one edge each to get at least |T | edges.
Similar to the Max-flow min-cut theorem, Menger's theorem can also be proven under differing frameworks. We shall prove two of them and leave the rest as exercises.
where E′ is an s − t directed cut if every directed path from s to t contains at least one edge in E′.
The proof proceeds by first showing that deletion of incoming edges at s and outgoing edges at t
do not change the LHS or RHS. Then, we shall show that by setting c ≡ 1 on E, the LHS equals the
max-flow and the RHS equals the min-cut thereby proving the theorem via max-flow min-cut theorem
for digraphs.
Proof. Assume that λ(s, t) > 0 else the theorem holds trivially.
Observe that λ(s, t), C(s, t) remain unchanged if we delete incoming edges at s and outgoing edges
at t and so we shall assume that there are no incoming edges at s and outgoing edges at t.
Suppose there are k edge-disjoint paths. Then, any s − t edge-cut E′ has to contain at least one edge from each of the disjoint paths and hence |E′| ≥ k. Thus, we trivially get that λ(s, t) ≤ C(s, t) as in the Max-flow Min-cut theorem.
Now we construct a directed network on G by assigning capacity 1 to every edge. Since c ≡ 1, the max-flow f is integral and further f ∈ {0, 1}. Trivially, v(f) ≥ λ(s, t), as we can send a unit flow along each of the disjoint paths and flow properties and strength are preserved under addition. Consider the graph G_f := (V, {e : f(e) = 1}). Since v(f) ≥ 1, there is an s − t path P_1 in the graph G_f (see Exercise 7.2.5 for a more general claim). Now consider the graph G_f − P_1. This is nothing but the graph G_{f_1} := (V, {e : f_1(e) = 1}) where f_1(e) = f(e) − 1[e ∈ P_1]. Since e ↦ 1[e ∈ P_1] is also a flow, by linearity f_1 is also a flow and further v(f_1) = v(f) − 1. Now, the same argument as above yields that there is an s − t path P_2 in G_f − P_1 if v(f_1) ≥ 1, and P_2 is edge-disjoint from P_1. Repeating this argument, we can obtain v(f) many edge-disjoint s − t paths, i.e., λ(s, t) = MF(s, t) := the max-flow in G.
By our definition of capacity, note that C(S, S^c) = |E ∩ (S × S^c)| for s ∈ S, t ∉ S. Since
{E′ ⊂ E : G − E′ disconnects s, t} ⊃ {E ∩ (S × S^c) : s ∈ S, t ∉ S, S ⊂ V},
we have that C(s, t) ≤ inf_{S : s ∈ S, t ∉ S} C(S, S^c), the min-cut of the network. Now, from the Max-flow Min-cut theorem for directed graphs (Exercise 7.1.12), we have that the inequalities are all equalities and the proof is complete.
Exercise* 7.2.5. Prove the following more general version of the claim used in the above proof : If f is an s − t flow with v(f) > 0, then there exists a directed s − t path in the graph G_f := {e : f(e) > 0}.
Proof. One can try to prove a Max-flow Min-cut theorem for vertex capacities and then use the same
to prove Menger’s theorem. But we will use a graph transformation to reduce the vertex-connectivity
case to edge-connectivity case itself.
Consider the following enhanced graph G′ : V′ := {s, t} ∪ {x_1, x_2 : x ∈ V − {s, t}} and as for the edges E′, s ∼ x_1 if s ∼ x in G ; x_2 ∼ t if x ∼ t in G ; x_2 ∼ y_1 if x ∼ y in G ; x_1 ∼ x_2 for all x ∈ V − {s, t}. Set c(x_1, x_2) = 1 for all x and c ≡ ∞ otherwise.
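To make the reduction concrete, here is a small Python sketch (mine, not from the notes) of the construction of G′ above, using the nested-dict capacity format of the earlier max_flow sketch; the labels (x, 1) and (x, 2) play the roles of x_1 and x_2, and s, t are assumed to be nonadjacent as in Menger's vertex version.

import math
from collections import defaultdict

# A sketch of the vertex-splitting construction above: every internal vertex x
# becomes the unit-capacity arc x1 -> x2; all other arcs get infinite capacity.
def split_vertices(edges, s, t):
    cap = defaultdict(dict)
    vertices = {v for e in edges for v in e}
    for x in vertices - {s, t}:
        cap[(x, 1)][(x, 2)] = 1                   # c(x1, x2) = 1
    for u, v in edges:
        for a, b in ((u, v), (v, u)):             # G is undirected
            if a == s and b not in (s, t):
                cap[s][(b, 1)] = math.inf         # s -> x1 if s ~ x in G
            elif b == t and a not in (s, t):
                cap[(a, 2)][t] = math.inf         # x2 -> t if x ~ t in G
            elif a not in (s, t) and b not in (s, t):
                cap[(a, 2)][(b, 1)] = math.inf    # x2 -> y1 if x ~ y in G
    return dict(cap)

# The value of a max s-t flow in this network (e.g. computed with the earlier
# max_flow sketch) equals the maximum number of internally disjoint s-t paths in G.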
Note that any directed s − t path in G′ is of the form s a_1 a_2 b_1 b_2 . . . z_1 z_2 t where s a b . . . z t is the corresponding directed s − t path in G. Further, if P_1, P_2, . . . , P_k are vertex-disjoint s − t paths in G, the corresponding paths in G′ are trivially edge-disjoint as well. Conversely, let P′_1, P′_2 be two edge-disjoint paths in G′, say s a_1 a_2 b_1 b_2 . . . z_1 z_2 t and s a′_1 a′_2 b′_1 b′_2 . . . z′_1 z′_2 t respectively. Then edge-disjointness implies that (a_1, a_2) ≠ (a′_1, a′_2), . . . , (z_1, z_2) ≠ (z′_1, z′_2) and similarly for (b_1, b_2) and so on. Thus, the corresponding paths P_1, P_2 in G are vertex-disjoint. Thus we have that
Exercise* 7.2.7. (Menger's theorem for edge connectivity ; Menger (1927)) Let G be a finite undirected graph and u and v two distinct vertices. Then the size of the minimum edge cut for u and v (the minimum (u, v)-edge cut) is equal to the maximum number of pairwise edge-disjoint paths from u to v.
Exercise* 7.2.8. (Menger's theorem for vertex connectivity ; Menger (1927)) Let G be a finite undirected graph, and let u and v be nonadjacent vertices in G. Then, the maximum number of pairwise-internally-disjoint (u, v)-paths in G equals the minimum number of vertices from V(G) − {u, v} whose deletion separates u and v.
We can prove Hall’s marriage theorem as well as Konig-Egervary theorem using max-flow min-cut or
Menger’s theorem. See exercises in the next section. Also, see Section 6.6.
The max-flow min-cut theorem can be derived from a more powerful theorem called the strong
duality theorem in linear programming (see https://en.wikipedia.org/wiki/Max-flow_min-cut_
theorem#Linear_program_formulation). This latter theorem for example can be used to prove
the Monge-Kantorovich duality theorem in Optimal transport theory (https://en.wikipedia.org/
wiki/Transportation_theory_(mathematics)).
7.4 Exercises
(a) Consider the maximum number of disjoint paths from S to T such that the paths do not
intersect even at S and T .
(b) Consider the maximum number of disjoint paths from S to T such that the paths are
allowed to intersect at S or T .
2. Show that edge connectivity and vertex connectivity are equal if ∆(G) ≤ 3.
3. If G is a connected graph and for every edge e, there are cycles C1 and C2 such that E(C1 ) ∩
E(C2 ) = {e} then G is 3-edge connected.
5. Let F be a non-empty set of edges in G. Prove that F is an edge-cut of the form F = E ∩(S ×S c )
for some S ⊂ V in G iff F contains an even number of edges from every cycle C.
6. If G is a k-connected graph and a new graph G′ is formed by adding a new vertex y with at least k neighbours in G, then G′ is also k-connected.
7. Let G be a graph with at least 3 vertices. Show that G is 2-connected ⇔ G is connected and has no cut-vertex ⇔ for all x, y ∈ V(G) there exists a cycle through x and y ⇔ δ(G) ≥ 1 and every pair of edges lies in a common cycle.
8. Consider a directed graph G = (V, E) and s, t ∈ V . Further assume that there are no incoming
edges at s or no out-going edges at t. An elementary s − t flow is a flow f which is obtained by
assigning a constant positive value a to the edges on a simple directed s − t path and 0 to all
other edges. Show that every flow is a sum of elementary flows and a flow of strength 0.
9. Consider a directed graph G = (V, E) and s, t ∈ V . Further assume that there are no incoming
edges at s or no out-going edges at t. If f is an integral flow of strength k, show that there exist
k directed paths p1 , . . . , pk such that for all e ∈ E, |{pi : e ∈ pi }| ≤ f (e).
10. * (Generalized max-flow min-cut theorem :) Let G = (V, E) be a directed graph, s, t ∈ V and
c : (V − {s, t}) ∪ E → [0, ∞) be a capacity constraint function. Then f : E → [0, ∞) is a s − t
flow satisfying capacity constraint c if
(a) $\sum_{e : e^- = x} f(e) = \sum_{e : e^+ = x} f(e)$ for all x ≠ s, t. (conservation of flow / Kirchhoff's node law).
(b) For all e ∈ E, f(e) ≤ c(e). (edge capacity constraint).
(c) $\sum_{e : e^+ = x} f(e) \le c(x)$ for all x ≠ s, t. (vertex capacity constraint.)
(d) $|f| := \sum_{e : e^- = s} f(e) - \sum_{e : e^+ = s} f(e) \ge 0$. (flow strength is non-negative.)
Show that
11. * Can you state the defective version of Hall's marriage theorem as a max-flow min-cut theorem ?
Question 7.4.1. Can you state Tutte’s 1-factor theorem also as a max-flow min-cut theorem ?
Chapter 8

Chromatic number and polynomials
Definition 8.1.1 (Coloring of a graph). A graph is k-colorable if ∃ c : V → [k] such that c(u) ≠ c(v) if (u, v) ∈ E. The chromatic number χ(G) is defined as
Trivially, we have that cl(G) ≤ χ(G) ≤ ∆(G) + 1 where cl(G) is the size of the largest clique (complete subgraph) in G. Further, a graph is k-colorable iff its vertex set can be partitioned into k independent sets.
Let G be a graph on n vertices. Let P(G, q), q ∈ N*, be the number of ways of properly coloring the graph with q colours. More formally,
We will now show that this is a monic polynomial of degree n and establish other interesting properties.
Let G − e denote the graph with the edge e removed. Let G/e denote the graph with e contracted, i.e., if e = (v, w) then we replace the vertices v, w with v′ and edges v ∼ x, w ∼ y with v′ ∼ x, v′ ∼ y. Observe that G/e can be a multigraph. But this will not matter to us as proper colorings of a multigraph and its corresponding simple graph are the same.
Proof. The proof follows by two simple observations: any proper q-coloring of G is a proper q-coloring of G − e in which v and w receive distinct colours, and any proper q-coloring of G − e in which v and w receive the same colour is a proper q-coloring of G/e.
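As an illustration of the recurrence behind Proposition 8.2.1 (namely P(G, q) = P(G − e, q) − P(G/e, q), which is what the two observations above amount to), here is a short Python sketch of mine that evaluates P(G, q) recursively; vertices are arbitrary hashable labels and edges are given as 2-element frozensets.

# A sketch evaluating P(G, q) by deletion-contraction. Parallel edges created by a
# contraction collapse (we use a set of frozensets), which does not change the
# number of proper colorings; loops are discarded.
def num_colorings(vertices, edges, q):
    if not edges:
        return q ** len(vertices)                 # empty graph: q^n colorings
    e = next(iter(edges))
    v, w = tuple(e)
    deleted = edges - {e}                         # edge set of G - e
    # edge set of G / e: relabel w as v and drop the loop that e becomes
    merged = {frozenset(v if x == w else x for x in f) for f in deleted}
    merged = {f for f in merged if len(f) == 2}
    return (num_colorings(vertices, deleted, q)
            - num_colorings(vertices - {w}, merged, q))

# A triangle has q(q-1)(q-2) proper q-colorings:
# num_colorings({1, 2, 3}, {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}, 3) -> 6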
If E = ∅, trivially P(G, q) = q^n. Now using Proposition 8.2.1, we can inductively show that P(G, q) is a polynomial in q. Hence, we shall define P(G, x), x ∈ R, to be the polynomial such that P(G, q) is the number of proper q-colorings of G for q ∈ N*. Here are some very important properties.
2. P (G, x) is the unique polynomial such that P (G, q) is the number of proper q-colorings of G for
q ∈ N.
4. $\sum_{i=1}^{n} a_i = 0$ or P(G, x) = x^n.
5. a_{n−1} = −|E|.
Proof. All the proofs will follow from the deletion-contraction principle and induction. For abbreviation, we call this the DCI argument in this section.
2. The proof follows because two polynomials are equal if they agree at the infinitely many points 1, . . . , n, . . ..
4, 5 and 6 : We will apply the DCI argument again and prove (4), (5) and (6) together. The three identities can be verified easily when E = ∅ or G has 1 or 2 vertices. Let
$$P(G - e, x) = \sum_{i=0}^{n} (-1)^i |b_{n-i}| x^{n-i}, \qquad P(G/e, x) = \sum_{i=0}^{n-1} (-1)^i |c_{n-1-i}| x^{n-1-i} = \sum_{i=1}^{n} (-1)^{i-1} |c_{n-i}| x^{n-i}.$$
As for (4), observe that $\sum_{i=1}^{n} a_i = P(G, 1)$ = the number of proper 1-colorings of G. If E ≠ ∅, there exists no proper 1-coloring of G and so P(G, 1) = 0. Else P(G, x) = x^n as required.
(6) follows easily as a_{n−i} = (−1)^i [|b_{n−i}| + |c_{n−i}|]. As for (5), observe by monicity of the chromatic polynomial, induction and the DC principle that
$$P(G, x) = \prod_{i=1}^{k} P(G_i, x)$$
Proof. First equality follows trivially because any combination of q-colorings of each component gives a q-coloring of the full graph G i.e.,
$$P(G, q) = \prod_{i=1}^{k} P(G_i, q), \qquad q \in \mathbb{N}^*.$$
Proof. It is enough to show that $P(G, q) = \sum_{X \subset E} (-1)^{|X|} q^{\beta_0(X)}$ for q ∈ N*. Call a coloring c : V → [q] improper if c(u) = c(v) for some u ∼ v. Let IC be the set of all improper colorings. For e = (u, v) define B_e := {c ∈ IC : c(u) = c(v)}. Then we have that
To complete the proof, it is enough to prove that for ∅ ≠ X ⊂ E, $|\cap_{e \in X} B_e| = q^{\beta_0(X)}$. Suppose X = {e_1, . . . , e_k} where e_i = (u_i, v_i), 1 ≤ i ≤ k. Let C_1, . . . , C_m be the components in (V, X), i.e., m = β_0(X). It is possible that u_i = u_j or v_j. Thus, we have that
The latter is obtained by choosing one color for each component C_1, . . . , C_m and so its cardinality is q^m as required.
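As a sanity check on the subset expansion used in this proof, the following Python sketch of mine evaluates $\sum_{X \subset E} (-1)^{|X|} q^{\beta_0(X)}$ by brute force over edge subsets (feasible only for small graphs); it can be compared against the deletion-contraction sketch num_colorings given earlier.

from itertools import combinations

# A sketch evaluating P(G, q) = sum over X ⊆ E of (-1)^{|X|} q^{β0(X)},
# where β0(X) is the number of components of the spanning subgraph (V, X).
def chromatic_by_subsets(vertices, edges, q):
    edges = list(edges)
    total = 0
    for k in range(len(edges) + 1):
        for X in combinations(edges, k):
            # count the components of (V, X) with a tiny union-find
            parent = {v: v for v in vertices}
            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]
                    v = parent[v]
                return v
            for u, w in X:
                parent[find(u)] = find(w)
            beta0 = len({find(v) for v in vertices})
            total += (-1) ** k * q ** beta0
    return total

# For the triangle this again gives q(q-1)(q-2):
# chromatic_by_subsets({1, 2, 3}, [(1, 2), (2, 3), (1, 3)], 3) -> 6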
8.3 Exercises
1. Find the chromatic polynomial of complete graph Kn , cycle Cn , wheel graph Wn (i.e., the graph
obtained by adding a new vertex to Cn and connecting it to all the n vertices of Cn ), path graph
Pn and the Petersen graph.
2. Show that the coefficient of x^{n−2} in the chromatic polynomial of an n-vertex graph is $\frac{m(m-1)}{2} - T$ where T is the number of triangles and m is the number of edges.
3. Show that a graph with n vertices is a tree iff P(G, x) = x(x − 1)^{n−1}.
Question 8.4.1 (Read's conjecture, 1968). |a_0|, . . . , |a_n| form a log-concave sequence, i.e., $|a_i|^2 \ge |a_{i-1}||a_{i+1}|$.
Verify the conjecture in the cases you can compute the coefficients such as Kn , Cn , Pn et al.
This was solved recently by June Huh (in his 1st year of Ph.D.) in 2012 using ideas from algebraic
geometry.
The crucial DC principle holds for more general combinatorial objects called Matroids (https://
en.wikipedia.org/wiki/Matroid) and analogous to a chromatic polynomial, one can define what is
called the characteristic polynomial of a matroid . The characteristic polynomial of a matroid is defined
analogously to the identity in Theorem 8.2.4. Read’s conjecture was generalized to characteristic
polynomial of matroids and known as Rota-Welsh conjecture . This was proved in 2015 by Karim
Adiprasito, June Huh, and Eric Katz. An accessible exposition of these ideas can be found in the blog of Matt Baker ([Baker 2015]) and a little more technical account in his survey ([Baker 2018]).
Chapter 9

Graphs and matrices
See [Bapat 2010] for recollection of some basic linear algebra facts used.
Definition 9.1.1 (Incidence matrix). ∂_1 is an n × m matrix with ∂_1(i, j) = χ_{e_j}(v_i) where χ_e(v) = 1[v = e^+] − 1[v = e^-] for e = (e^-, e^+) ∈ $\vec{E}$, v ∈ V.
∂_1(i, j)   e_1   e_2   . . .   e_m
v_1         −1     0    . . .    0
v_2          1    −1    . . .    1
⋮            ⋮     ⋮     ⋱       ⋮
v_n          0     0    . . .   −1
Remark 9.1.2. We consider the matrices as real matrices and our matrix algebra will be with respect to R. One can consider any other field F instead of R as well.
$$\partial_1 : C_1 \to C_0, \qquad z = \sum_{i=1}^{m} a_i e_i \mapsto \sum_i a_i (e_i^+ - e_i^-).$$
Suppose we represent $\partial_1 z = \sum_{i=1}^{n} b_i v_i$; then $(b_1, \ldots, b_n)^t = \partial_1 [a_1, \ldots, a_m]^t$. Further, if the columns of the matrix $\partial_1$ are $C_1, \ldots, C_m$, then $(b_1, \ldots, b_n)^t = \sum_{i=1}^{m} a_i C_i$.
Assume that G ≠ ∅. We set B_i = Im(∂_{i+1}), Z_i = Ker(∂_i).
Remark 9.1.3. The above abstraction of representing C_0, C_1 as formal sums seems a little unnecessary as we could have also represented C_0, C_1 as functions from V → R and $\vec{E}$ → R respectively. However, this representation will be notationally convenient and is borrowed from algebraic topology where one considers richer structures giving rise to further vector spaces such as C_2, C_3, . . .. Further, if we choose Z-coefficients instead of R-coefficients, C_0, C_1 are modules and not vector spaces. Thus, the above representation allows us to carry over similar algebraic operations even in such a case. As an exercise, the algebraically inclined students can try to repeat the proofs with Z-coefficients. See [Edelsbrunner 2010] for an accessible introduction to algebraic topology and [Munkres 2018] for more details.
Trivially, we have that B_{−1} = C_{−1} = R. Thus, by the rank-nullity theorem, we have that r(Z_0) = n − 1 where by r(·) we denote the rank/dimension of a vector space. It is easy to see that v_i − v_j ∈ Z_0 for all i ≠ j. Further, we have that v_1 − v_i, i = 2, . . . , n are linearly independent and hence form a basis for Z_0. Observe that for $z = \sum_{i=1}^{m} b_i e_i$,
$$\partial_0 \partial_1 z = \partial_0\Big(\sum_{i=1}^{m} b_i (e_i^+ - e_i^-)\Big) = \sum_{i=1}^{m} b_i (\partial_0 e_i^+ - \partial_0 e_i^-) = 0,$$
where in the first and third equalities we have used the definition of ∂1 , ∂0 respectively and in the
second equality, the linearity of ∂0 . Thus, we have that ∂0 ∂1 = 0 and in other words B0 ⊂ Z0 . The
same can also be deduced by noting that as matrix product ∂0 ∂1 is nothing but sum of rows of ∂1 and
hence 0.
Now, if we can understand r(B_0) then we can understand all the ranks involved. Observe that B_0 is nothing but the column space of ∂_1, i.e., denoting the columns of the matrix ∂_1 by C_1, . . . , C_m, we have that
$$B_0 \cong \Big\{ \sum_{i=1}^{m} a_i C_i : a_i \in \mathbb{R} \Big\}.$$
Hence r(B0 ) is nothing but the maximum number of linearly independent column vectors.
Suppose e_1, . . . , e_k denote the edges corresponding to the linearly independent columns. Assume that the subgraph H = e_1 ∪ . . . ∪ e_k is connected. Let V(H) = {v_1, . . . , v_l}. Consider the incidence matrix restricted to {v_1, . . . , v_l} × {e_1, . . . , e_k}. Call it ∂′_1. Since the non-trivial entries of the e_i's are on v_1, . . . , v_l, the columns of ∂′_1 are also linearly independent and so the column rank of ∂′_1 is k. Let R_1, . . . , R_l be the rows of the matrix ∂′_1. Since every column has exactly one +1 and one −1, we have that $\sum_i R_i = 0$ and thus the row rank of ∂′_1 is at most l − 1. Since the row rank = column rank, we have
Proof. We have shown the 'if' part. We will show the 'only if' part. Suppose that H = e_1 ∪ . . . ∪ e_k is acyclic. We will show that C_1, . . . , C_k are linearly independent by induction. The conclusion holds trivially for k = 1. Suppose it holds for all l < k. WLOG assume that e_k is a leaf edge with deg_H(e_k^+) = 1. Thus e_k^+ ∉ e_1 ∪ . . . ∪ e_{k−1}. If $\sum_{i=1}^{k} a_i C_i = 0$ then, since e_k^+ ∈ e_k only, we have that a_k = 0. Thus $\sum_{i=1}^{k-1} a_i C_i = 0$, and since e_1 ∪ . . . ∪ e_{k−1} is acyclic, by the induction hypothesis C_1, . . . , C_{k−1} are linearly independent, i.e., a_i = 0 for 1 ≤ i ≤ k.
$$B_0 = \Big\{ \sum_{i=1}^{k} a_i \partial_1(e_i) : a_i \in \mathbb{R} \Big\}.$$
3. G is connected iff B0 = Z0 .
By the rank-nullity theorem, we have that r(Z1 ) = m − n + β0 (G) which is nothing but the number
of excess edges, i.e., edges added after forming a (maximal) spanning forest. Let e_1 e_2 . . . e_k form a cycle; denoting the reversal of an edge e by ê, set
$$z_c := \sum_{i=1}^{k} \Big( 1[e_i \in \vec{E}]\, e_i - 1[\hat{e}_i \in \vec{E}]\, \hat{e}_i \Big).$$
Thus, it is easy to verify that ∂1 zc = 0 and hence zc ∈ Z1 . We can extend this further.
Theorem 9.1.6. Let F be a maximal spanning forest in G and $e_1, \ldots, e_l \in \vec{E} - E(F)$ where l = m − n + β_0(G). Let C_1, . . . , C_l be the cycles in F ∪ e_1, . . . , F ∪ e_l respectively. Then z_{C_1}, . . . , z_{C_l} form a basis for Z_1.
Remark 9.1.7. Another challenging exercise : In terms of matrices, the linear transformations ∂_0, ∂_1 represent right multiplication, i.e., we have that
$$\{0\} \xrightarrow{\;\partial_2\;} C_1 \xrightarrow{\;\partial_1\;} C_0 \xrightarrow{\;\partial_0\;} \mathbb{R} \xrightarrow{\;\partial_{-1}\;} \{0\}.$$
But we could have considered left multiplication, in which case we will have linear transformations δ_i's as follows :
$$\{0\} \xleftarrow{\;\delta_2\;} C_1 \xleftarrow{\;\delta_1\;} C_0 \xleftarrow{\;\delta_0\;} \mathbb{R} \xleftarrow{\;\delta_{-1}\;} \{0\}.$$
Can you compute the ranks and characterize connectivity as we did above with ∂0 , ∂1 ’s ?
Theorem 9.1.8. Let G be a connected graph. Then r(M ) = n − 1 if G is bi-partite and r(M ) = n
otherwise.
The Laplacian matrix plays a fundamental role in many topics within probability, graph theory,
algebraic topology and analysis. See Section 9.4 for references.
Let D = diag[deg(1), . . . , deg(n)] be the diagonal matrix with entries as the degrees of vertices.
Recall that A is the adjacency matrix defined in Definition 3.4.1. The Laplacian matrix is defined as
L = D − A. Verify that L = ∂1 ∂1T using the matrix representation of ∂1 and its transpose. Observe
that even though ∂1 is oriented, L is unoriented. Further, it is easy to see that if we think of ∂1 as
linear transformations as in the previous sections, we have that
$$L : C_0 \to C_0; \qquad \sum_{i=1}^{n} a_i v_i \mapsto \sum_{i=1}^{n} \Big( a_i \deg(v_i) - \sum_{j : v_j \sim v_i} a_j \Big) v_i.$$
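As a quick numerical illustration (my own sketch, assuming numpy is available and an arbitrary fixed orientation of the edges), one can build ∂_1 and check that its columns sum to zero (so ∂_0 ∂_1 = 0) and that L = D − A = ∂_1 ∂_1^T.

import numpy as np

# A sketch: build the oriented incidence matrix and verify L = D - A = ∂1 ∂1^T.
def incidence_and_laplacian(n, edges):
    # edges: list of oriented pairs (u, v) meaning e^- = u, e^+ = v; vertices 0..n-1
    m = len(edges)
    d1 = np.zeros((n, m))
    A = np.zeros((n, n))
    for j, (u, v) in enumerate(edges):
        d1[u, j] = -1.0            # chi_e(e^-) = -1
        d1[v, j] = +1.0            # chi_e(e^+) = +1
        A[u, v] = A[v, u] = 1.0
    D = np.diag(A.sum(axis=1))
    L = D - A
    assert np.allclose(d1.sum(axis=0), 0)   # every column has one +1 and one -1
    assert np.allclose(L, d1 @ d1.T)        # L does not depend on the orientation
    return d1, L

# Example: a 4-cycle 0-1-2-3-0 with an arbitrary orientation of each edge.
# incidence_and_laplacian(4, [(0, 1), (1, 2), (2, 3), (3, 0)])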
For purposes of our analysis, we will view L primarily as a matrix. Firstly observe that r(L) =
r(∂1 ∂1t ) ≤ r(∂1 ) ∧ r(∂1t ) = r(∂1 ). Trivially, we have that Ker∂1t ⊂ KerL.
We set the notation $\langle x, y \rangle = x \cdot y = x y^t = y x^t$ for $x, y \in \mathbb{R}^n$ and $\|x\|^2 = \langle x, x \rangle = \sum_{i=1}^{n} x_i^2$.
Observe that for x ∈ Rn , we have that
Hence, if x ∈ KerL then by the above equation ∂1t x = 0 i.e., x ∈ Ker∂1t i.e., KerL = Ker∂1t by
our earlier observation. Hence n(L) := r(KerL) = r(Ker∂1t ) = n − r(∂1 ) = β0 (G). Since n(L) is the
multiplicity of the eigenvalue 0, this shows that the multiplicity of 0 is β0 (G).
From (9.1), we also have that L is a positive semi-definite matrix and hence all its eigenvalues are non-negative. Denote the eigenvalues by λ_1 ≤ λ_2 ≤ . . . ≤ λ_n. Summarizing our above conclusions, here is the theorem.
Lemma 9.1.10. Let C_1, . . . , C_l be the components of G. Define x^1, . . . , x^l as vectors that are constant on each of the components respectively, i.e., x^i_k = 1[k ∈ C_i]. Then, we have that x^1, . . . , x^l form a basis for Ker L.
Proof. From the observation before the lemma, we have that $\|\partial_1^t x\|^2 = \sum_{v_i \sim v_j} (x_i - x_j)^2$, and from (9.1), we know that if Lx = 0 then ∂_1^t x = 0 and hence x_i = x_j for all v_i ∼ v_j. This implies that x is constant on each component and hence x^1, . . . , x^l ∈ Ker L. Since r(Ker L) = l, it suffices to show that x^1, . . . , x^l are linearly independent. This follows easily as the vectors are supported on disjoint sets of vertices.
We recall the famed Cauchy–Binet formula. For an n × m matrix A and an m × n matrix B with n ≤ m, we have that
$$\det(AB) = \sum_{S \subset [m],\, |S| = n} \det(A[[n]|S])\, \det(B[S|[n]]),$$
where A[T|S] refers to the matrix A restricted to rows in T and columns in S. Let W = V − {1} and we can apply the Cauchy–Binet formula to L[W|W]. As L = ∂_1 ∂_1^t, we have that $L[W|W] = \partial_1[W|\vec{E}]\, \partial_1^t[\vec{E}|W]$.
$$\det(L[W|W]) = \sum_{S \subset \vec{E} :\, |S| = n-1} \det(\partial_1[W|S])\, \det(\partial_1^t[S|W]) = \sum_{S \subset \vec{E} :\, |S| = n-1} \det(\partial_1[W|S])^2,$$
where the last equality is due to the fact that $\partial_1[W|S]^t = \partial_1^t[S|W]$. Since the sum of the rows in ∂_1[V|S] is 0, r(∂_1[W|S]) = r(∂_1[V|S]). Hence, if G is not connected, then $r(\partial_1[W|\vec{E}]) < n - 1$ and det(L[W|W]) = 0. Thus, if |E| < n − 1, the LHS and the RHS in the above equation are zero. Assume that G is connected.
Fix $S \subset \vec{E}$ with |S| = n − 1. Then det(∂_1[W|S]) ≠ 0 iff ∂_1[W|S] is non-singular iff ∂_1[W|S] has full column rank iff r(∂_1[W|S]) = r(∂_1[V|S]) = n − 1 iff S is acyclic iff S is a spanning tree in G. Thus, we have that
$$\det(L[W|W]) = \sum_{S \subset \vec{E},\ S \text{ a spanning tree}} \det(\partial_1[W|S])^2.$$
Suppose we show that det(∂1 [W |S]) ∈ {0, −1, +1}, then we have that
Lemma 9.1.11. For W ⊂ V, $S \subset \vec{E}$ such that |W| = |S|, we have that det(∂_1[W|S]) ∈ {0, −1, +1}.
Proof. We will prove by induction on k = |W | = |S|. Let B = ∂1 [W |S]. The case of k = 1 is trivial
as the entries are 0, −1, +1. Assume that the theorem holds for all l < k for k ≥ 2.
If each column of B has both a +1 and a −1 entry, then the sum of the rows is 0 and det B = 0. If there is a column of 0's in B, then det B = 0 again. Hence, there is a column with only a single non-zero entry in B. Suppose this is the (w, s)-th entry. Then, we have that det B = ±B_{w,s} det(∂_1[W − {w}|S − {s}]). By induction, the latter is in {0, −1, +1} and since B_{w,s} = ±1, we obtain the desired conclusion.
as the sum of the k × k principal minors of a matrix is equal to the sum of products of its eigenvalues taken k at a time.
Theorem 9.1.13. Let W ⊂ [n] and |W | = n − k. Then det(L[W |W ]) is the number of spanning
forests in G with k components and each of the elements of W c being in a distinct component.
From Lemma 9.1.11, we know that det(∂_1[W|S]) ∈ {0, −1, +1} and hence it is enough to show that ∂_1[W|S] is non-singular iff the edges of S form a forest with k components with each w_i in a distinct component.
Let the edges of S form a forest with k components with each w_i in a distinct component, say D_i. Let the edges in the i-th component be S_i. Set W_i = D_i − w_i. We have that ∪W_i = W and ∪S_i = S. Thus ∂_1[W|S] is a block matrix with blocks ∂_1[W_i|S_i], i = 1, . . . , k. By the proof of Corollary 9.1.12, we have that det(∂_1[W_i|S_i]) ∈ {−1, +1} as (D_i, S_i) forms a tree, and hence det(∂_1[W|S]) ∈ {−1, +1}.
If ∂_1[W|S] is non-singular, then the columns of S in ∂_1[V|S] are linearly independent and hence the edges in S form a forest, and they have k components as |S| = n − k. to be completed
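As a numerical check of the matrix-tree computation above (my own sketch, assuming numpy), deleting one row and the corresponding column of L and taking a determinant counts the spanning trees; for K_4 this gives 4^{4−2} = 16, in accordance with Cayley's formula.

import numpy as np

# A sketch: count spanning trees as det(L[W|W]) with W = V minus one vertex.
def count_spanning_trees(n, edges):
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    reduced = L[1:, 1:]                  # delete the row and column of vertex 0
    return round(np.linalg.det(reduced))

# K4: count_spanning_trees(4, [(0,1), (0,2), (0,3), (1,2), (1,3), (2,3)]) -> 16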
Theorem 9.2.1.
$$\det A = \sum_{H :\, H \text{ is a spanning elementary subgraph}} (-1)^{n - c_1(H) - c(H)}\, 2^{c(H)}.$$
Proof. Denoting the symmetric group of permutations by S_n, we have that
$$\det A = \sum_{\pi \in S_n} (-1)^{n - |\pi|} A_\pi; \qquad A_\pi := \prod_{i=1}^{n} A_{i, \pi(i)},$$
and |π| is the number of cycles in the cycle decomposition of π. Define the graph H_π = {i ∼ j : π(i) = j}. Note that A_π ∈ {0, 1}. We shall sketch the proof steps here and refer to [Bapat 2010, Theorem 3.8] for the details.
Firstly, A_π = 1 iff i ∼ π(i) (and hence π(i) ≠ i) for every i, iff H_π is an elementary subgraph with the components in H_π corresponding to the cycles in π. Secondly, for an elementary subgraph H
as reversing the orientation in a cycle of π still yields that Hπ = H. Finally, since each cycle in π
corresponds to a component in Hπ , we have that |π| = c(H) + c1 (H). Thus, the proof is complete by
the determinantal formula above.
Recall that the characteristic polynomial is $\varphi_A(\lambda) = \det(\lambda I - A) = \sum_{i=0}^{n} c_i \lambda^{n-i}$. Note that c_0 = 1. Further, we have that $c_k = (-1)^k \sum_{W \subset [n],\, |W| = k} \det(A[W|W])$. Observe that A[W|W] = A(H_W) where H_W is the subgraph of G induced on W, and so
$$c_k = \sum_{H :\, H \text{ el. subgraph},\ |V(H)| = k} (-1)^{c_1(H) + c(H)}\, 2^{c(H)}.$$
Trivially, c_1 = 0.
Suppose that c_3 = . . . = c_{2k−1} = 0 for some k ≥ 1.
If there is a cycle of length 3, then c_3 ≠ 0 as every elementary subgraph on 3 vertices is a cycle. Since c_3 = 0, there is no cycle of length 3. If there is a cycle of length 5, then there is an induced cycle of length 3 or 5, but since there is no cycle of length 3, there is an induced cycle of length 5. Since there are no triangles, all elementary subgraphs on 5 vertices are induced 5-cycles and so c_5 ≠ 0, a contradiction. Thus there is no cycle of length 5.
Similarly, if there is a cycle of odd length l, there is an induced cycle of odd length at most l. Recursively applying the above argument, there are no induced odd cycles of length strictly smaller than l and hence the induced cycle is of length l. But this yields that c_l ≠ 0, a contradiction for l ≤ 2k − 1. Thus if c_3 = . . . = c_{2k−1} = 0, there are no odd cycles of length at most 2k − 1.
1. G is bipartite.
2. c2k+1 = 0, k = 0, 1, . . .
Proof. The observations before the theorem prove that (i) is equivalent to (ii). To show (ii) is equivalent to (iii), observe that (ii) implies that $\varphi_A(\lambda) = \prod_{i=1}^{m} (\lambda^2 - a_i)$ as φ_A(λ) has no odd coefficients. So, the eigenvalues are $\pm\sqrt{a_i}$, and since the eigenvalues are all real due to the symmetry of A, we have that the spectrum of A is symmetric.
Conversely, if the spectrum of A is symmetric then $\varphi_A(\lambda) = \prod_{i=1}^{m} (\lambda^2 - a_i^2)$ for some a_i ∈ R and thus φ_A(λ) has no odd coefficients.
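As a small numerical illustration of the equivalence just proved (my own sketch, assuming numpy), the adjacency spectrum of an even cycle is symmetric about 0 while that of an odd cycle is not.

import numpy as np

# A sketch: adjacency spectrum of the cycle C_n.
def cycle_spectrum(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return np.sort(np.linalg.eigvalsh(A))

# cycle_spectrum(6) is (up to rounding) [-2, -1, -1, 1, 1, 2]: symmetric, as C6 is bipartite.
# cycle_spectrum(5) is not symmetric about 0, reflecting the odd cycle.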
9.3 Exercises
1. Let G be a graph with incidence matrix ∂_1 and let B = (b_{ij})_{i,j ≤ k} be a (k × k)-submatrix of ∂_1 which is nonsingular. Show that there is precisely one permutation σ of 1, . . . , k such that the product $\prod_{i=1}^{k} b_{i\sigma_i}$ is non-zero.
2. Let δi = ∂iT be the transpose of the incidence matrices ∂i for i = 0, 1. Show that δi defines a
linear map from Ci−1 to Ci where C−1 = F, the underlying field. Further show that δ1 ◦ δ0 = 0.
Describe the spaces Im(δi ), Ker(δi ) for i = 0, 1. Can you express the number of connected
components in terms of the ranks of these spaces ?
3. Let G be a graph with k components. Suppose B is an (l × l)-submatrix of the incidence matrix ∂_1, with columns indexed by {e_1, . . . , e_l} and rows by {v_1, . . . , v_l}. Show that B is a non-singular matrix only if l ≤ r(∂_1) and {e_1, . . . , e_l} form a forest. What is the subgraph {e_1, . . . , e_l} when l = r(∂_1) ?
4. Compute the eigenvalues of the Laplacian and adjacency matrices of the cycle graph and the path graph.
5. Let G be a graph with n vertices, m edges and let λ_1 ≥ . . . ≥ λ_n be the eigenvalues of the adjacency matrix of G. Show the following bounds for the eigenvalues:
(a) $\lambda_1 \le \sqrt{\frac{2m(n-1)}{n}}$
6. Let G be a connected graph on n vertices and m edges. Let ∂1 be its incidence matrix under
a fixed orientation of edges. Let y = (y1 , . . . , yn )T be a (n × 1)-column vector such that for
some i 6= j ∈ [n], yi = +1, yj = −1 and yl = 0 for l ∈ [n] \ {i, j}. Show that there exists a
(m × 1)-column vector x such that ∂1 x = y. Give a graph-theoretic interpretation of the same.
8. Compute the eigenvalues of the Laplacian matrix L and adjacency matrix A of the Petersen graph. Calculate the number of spanning trees of the Petersen graph. (Hint : Show that A² + A − 2I = J.)
9. Compute the eigenvalues of L(G × H) in terms of L(G) and L(H) where G × H is the cartesian
product of G and H.
Chapter 10

Planar graphs

Can we draw graphs on paper without edges crossing each other ? We can draw K_4 and so any graph on at most 4 vertices can be drawn. What about K_5 ? First, we shall clarify what we mean by 'drawing'.
Definition 10.0.1 (Planar embedding). An embedding of the graph G = (V, E) in the plane is a pair of functions f, (f_e)_{e ∈ E} such that
• f : V → R² is an injection.
• ∀e = (u, v) ∈ E, f_e : [0, 1] → R² is a continuous simple path such that f_e(0) = f(u), f_e(1) = f(v), f_e([0, 1]) ∩ f(V) = {f(u), f(v)}.
• for e ≠ e′, we have that f_e([0, 1]) ∩ f_{e′}([0, 1]) = f(e ∩ e′) or equivalently f_e((0, 1)) ∩ f_{e′}((0, 1)) = ∅.
Since we are concerned with finite graphs, we shall assume that all fe ’s are polygonal line segments
i.e., union of finitely many straight lines.
We shall never specify fe directly but more via drawings. A plane graph is a graph G with
its embedding f, fe , e ∈ E. We shall view plane graphs as a subset of R2 by identifying G with
f (V ) ∪ ∪e∈E fe ([0, 1]) ⊂ R2 . We shall denote fe ([0, 1]) by e and fe ((0, 1)) by e̊. A graph isomorphic to
a plane graph is called a planar graph. We refer to [Diestel 2000, Chapter 4] for various topological and
geometric facts that shall be used in some of the proofs as well as more details. The most important
of these is the Jordan curve theorem.
Definition 10.0.2 (Faces). Let G be a plane graph. R2 − G is an open set and consists of finitely
many connected components. Each connected component is called a face.
Proposition 10.0.3. 1. For any face F and an edge e, e̊ ∩ ∂F = ∅ or e ⊂ ∂F. Thus we have that ∂F = ∪_{e : e̊ ∩ ∂F ≠ ∅} e.
4. An edge e belongs to a cycle iff e ∈ ∂F1 ∩ ∂F2 for two distinct faces F1 , F2 .
A corollary of the above proposition is that a forest has only one face. Further, we define the
length of a face F as
$$l(F) := \sum_{e \in \partial F} \big( 1[e \text{ is in a cycle}] + 2 \cdot 1[e \text{ is a cut-edge}] \big).$$
One can define a notion of ’traversal’ (i.e., a closed walk) of a face such that the length of the closed
walk is the length of the face.
Exercise 10.0.4. $\sum_F l(F) = 2|E|$.
Proof. The proof proceeds by showing the formula for forests and then inductively verifying it for connected graphs. For a forest, observe that there is only one face; the formula then follows from the definition of the length of a face, the fact that every edge is a cut-edge, v − e = β_0(G), and Exercise 10.0.4.
If the formula holds for trees, then one can show that if G, H are two disjoint plane graphs then there exist u ∈ G, v ∈ H such that G′ = G ∪ H ∪ (u, v) is also a plane graph with the number of faces being f(G′) = f(G) + f(H) − 1.
Proof. Note that if ∂F corresponds to a cycle, l(F) ≥ 3, and if not, l(F) ≥ 4 as v ≥ 3 and G is connected. Thus, we have by Euler's formula that
$$3(2 - v + e) = 3f \le \sum_F l(F) = 2e$$
and so e ≤ 3v − 6. Now, if G has no triangles, then l(F) ≥ 4 for all F and the above inequality yields that e ≤ 2v − 4.
Using the above lemma, we can show that K_{3,3} and K_5 are not planar. A direct constructive proof using some geometric arguments is also possible for the non-planarity of K_{3,3} and K_5.
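For instance, K_5 has v = 5 and e = 10 > 3 · 5 − 6 = 9, and the triangle-free K_{3,3} has v = 6 and e = 9 > 2 · 6 − 4 = 8; in both cases the corresponding bound of Lemma 10.1.2 fails, so neither graph is planar.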
Theorem 10.1.3 (Kuratowski's theorem). A graph is planar iff it has no K_{3,3} or K_5 minor.
A graph is a triangulation if it is connected and l(F ) = 3 for every face F . A planar graph is said
to be maximal if addition of any edge makes it non-planar.
1. e = 3v − 6 and G is connected
2. G is a triangulation.
Proof. The equivalence of (i) and (ii) is by noting that the inequality in the proof of Lemma 10.1.2 is
an equality in this case.
Exercise 10.1.5. Show that (ii) and (iii) are equivalent in the above lemma.
Proof. If every vertex has degree at least 6, then 2e ≥ 6v which contradicts Lemma 10.1.2.
Using the above lemma and induction, one can prove that a planar graph is 5-colorable. But we can do better with a little more argument.
Proof. The theorem is trivially true for graphs with at most 5 vertices. By induction, we assume that
the theorem holds for all graphs with at most n vertices.
Let G be a planar graph with n + 1 vertices. By Lemma 10.2.1, there is a vertex v of degree at
most 5. Choose the same.
CASE 1 : If deg(v) < 5, then by induction G − v is 5-colorable and we can assign v a color not
assigned to any of its (at most 4) neighbours.
CASE 2 : Suppose deg(v) = 5. Let N(v) = {v_1, . . . , v_5}. Since K_5 is not planar, v_1, . . . , v_5 cannot form a complete graph and hence WLOG, assume that v_1 ≁ v_2. Construct a new planar graph G′ by contracting the edge (v_1, v) and denoting the new vertex by v′. Further construct a planar graph G″ by contracting the edge (v′, v_2) and denoting the new vertex by v″. Since G″ has n − 1 vertices and is planar, it is 5-colorable.
To construct a 5-coloring of G : Assign the same colors as in G″ to all vertices in G except v, v_1, v_2. Further assign the color of v″ to v_1 and v_2. Now, N(v) has been colored using only 4 colors and so assign the 5th color to v.
Exercise 10.3.1. Show that the following are equivalent for a plane graph G.
1. G is bi-partite
10.4 Exercises
1. Merging two adjacent vertices of a planar graph yields another planar graph.
2. Any embedding of a planar graph will have the same number of faces.
3. Show that any connected triangle-free planar graph has at least one vertex of degree three or
less. Prove by induction on the number of vertices that any connected triangle-free planar graph
is 4-colorable.
4. Show that every planar graph with at least 4 vertices has at least 4 vertices of degree less than
or equal to 5. (Hint : Consider a maximal planar graph.)
• G∗ is connected.
• If G is connected, then each face of G∗ contains exactly one vertex of G.
2. Prove that a set of edges T ⊂ E(G) in a connected plane graph form a spanning tree iff the
duals of the edges E(G) − T form a spanning tree of G∗ .
3. Prove that every n-vertex self-dual plane graph has 2n − 2 edges. For all n ≥ 4, construct a
simple n-vertex self-dual plane graph.
[Addario-Berry 2015] Addario-Berry L. ”Partition functions of discrete coalescents: from Cayley’s formula to Frieze’s
ζ(3) limit theorem.” In XI Symposium on Probability and Stochastic Processes. (2015) (pp. 1-45). Birkhäuser, Cham.
[Aigner et al. 2010] Aigner, M., Ziegler, G. M., Hofmann, K. H., & Erdos, P. (2010). Proofs from the Book (Vol. 274).
Berlin: Springer.
[Alon and Spencer 2004] Alon, N., and Spencer, J. H. (2004). The probabilistic method. John Wiley and Sons.
[Babai 1992] Babai, László, and Péter Frankl. Linear Algebra Methods in Combinatorics: With Applications to Geometry and Computer Science. Department of Computer Science, University of Chicago, 1992.
[Baker 2018] Baker, M. (2018). Hodge theory in combinatorics. Bulletin of the American Mathematical Society, 55(1),
57-80.
[Bapat 2010] Bapat, R. B. (2010). Graphs and matrices (Vol. 27). London: Springer.
[Bauerschmidt et al. 2012] Bauerschmidt, Roland, Hugo Duminil-Copin, Jesse Goodman, and Gordon Slade. ”Lectures
on self-avoiding walks.” Probability and Statistical Physics in Two and More Dimensions (D. Ellwood, CM Newman,
V. Sidoravicius, and W. Werner, eds.), Clay Mathematics Institute Proceedings 15 (2012): 395-476.
[Bertsimas 1990] Bertsimas, Dimitris J., and Garrett Van Ryzin. ”An asymptotic determination of the minimum span-
ning tree and minimum matching constants in geometrical probability.” Operations Research Letters 9, no. 4 (1990):
223-231.
[Bollobas 2013] B. Bollobás, Modern graph theory, volume 184, (2013), Springer Science & Business Media.
[Bond and Levine 2013] B. Bond and L. Levine. (2013) Abelian Networks : Foundations and Examples,
arXiv:1309.3445v1.
[Clay and Margalit 2017] Clay, Matt, and Dan Margalit, eds. Office Hours with a Geometric Group Theorist. Princeton
University Press, 2017.
[Choudum 1986] Choudum, S. A. ”A simple proof of the Erdos-Gallai theorem on graph sequences.” Bulletin of the
Australian Mathematical Society 33, no. 1 (1986): 67-70.
[Corry and Perkinson 2018] S. Corry and D. Perkinson. Divisors and Sandpiles: An Introduction to Chip-Firing, Volume
114, (2018), American Mathematical Soc.
[Diestel 2000] R. Diestel, Graph theory, (2000), Springer-Verlag Berlin and Heidelberg GmbH.
[Edelsbrunner 2010] Edelsbrunner, H., & Harer, J. (2010). Computational topology: an introduction. American Math-
ematical Soc..
[Frieze 1985] Frieze, Alan M. ”On the value of a random minimum spanning tree problem.” Discrete Applied Mathe-
matics 10, no. 1 (1985): 47-56.
[Frieze and Pegden 2017] Frieze, Alan, and Wesley Pegden. ”Separating subadditive Euclidean functionals.” Random
Structures & Algorithms 51, no. 3 (2017): 375-403.
[Gale and Shapley 1962] Gale, David, and Lloyd S. Shapley. ”College admissions and the stability of marriage.” The
American Mathematical Monthly 69, no. 1 (1962): 9-15.
[Godsil and Royle 2013] Godsil, C., and Royle, G. F. (2013). Algebraic graph theory (Vol. 207). Springer Science &
Business Media.
[Grigoryan 2018] Grigor'yan, A. (2018). Introduction to Analysis on Graphs (Vol. 71). American Mathematical Soc..
[Grochow 2019] Grochow, Joshua. ”New applications of the polynomial method: The cap set conjecture and beyond.”
Bulletin of the American Mathematical Society 56, no. 1 (2019): 29-64.
[Guth 2016] Guth, Larry. Polynomial methods in combinatorics. Vol. 64. American Mathematical Soc., 2016.
[Jukna 2011] Jukna, Stasys. Extremal combinatorics: with applications in computer science. Springer Science & Business
Media, 2011.
[Lovasz 2011] Lovász, László. ”Graph Theory Over 45 Years.” In An Invitation to Mathematics, pp. 85-95. Springer,
Berlin, Heidelberg, 2011.
[Pitman 1999] Pitman, Jim. ”Coalescent random forests.” Journal of Combinatorial Theory, Series A 85, no. 2 (1999):
165-193.
[Perkinson 2011] D. Perkinson, J. Perelman and John Wilmes. Primer for the Algebraic Geometry of Sandpiles,
arXiv:1112.6163.
[Rhee 1992] Rhee, Wansoo T. ”On the travelling salesperson problem in many dimensions.” Random Structures &
Algorithms 3, no. 3 (1992): 227-233.
[Smirnov 2011] Smirnov, Stanislav. ”How do research problems compare with IMO problems?.” In An Invitation to
Mathematics, pp. 71-83. Springer, Berlin, Heidelberg, 2011.
[Spencer 1994] Spencer, J. (1994). Ten lectures on the probabilistic method (Vol. 64). SIAM.
[Steele 1997] Steele, J Michael. ”Probability theory and combinatorial optimization.” vol. 69, (1997) : SIAM.
[Terras 2010] Terras, A. (2010). Zeta functions of graphs: a stroll through the garden (Vol. 128). Cambridge University
Press.
[Tripathi 2010] Tripathi, Amitabha, Sushmita Venugopalan, and Douglas B. West. ”A short constructive proof of the
Erdős–Gallai characterization of graphic lists.” Discrete Mathematics 310, no. 4 (2010): 843-844.
[Van Lint and Wilson] van Lint, Jacobus Hendricus and Wilson, Richard Michael A course in combinatorics, 2001,
Cambridge university press.
[West 2001] D. B. West, Introduction to graph theory, Volume 2 (2001), Prentice hall Upper Saddle River.