
Graph Theory - Lecture notes.

D. Yogeshwaran
Indian Statistical Institute, Bangalore.

April 28, 2019


Contents

Index v

1 Disclaimers and Warnings 1

2 Introduction to Graphs 3
2.1 Definition and some motivating examples 3
2.1.1 Popular examples of graphs 4
2.2 Some history and more motivation 5
2.3 Course overview : 11
2.3.1 Flows, Matchings and Games on Graphs 11
2.3.2 Graphs and matrices 12
2.3.3 Random graphs and probabilistic method 12

3 The very basics 13
3.1 Some useful classes of graphs : 13
3.1.1 Some graph constructions 13
3.2 Some basic notions 14
3.3 Graphs as metric spaces 16
3.4 Graphs and matrices : A little peek 17
3.5 Euler’s theorem and König’s theorem on bi-partite graphs 18
3.6 ***Some questions*** 19

4 Spanning Trees 21
4.1 Trees and Cayley’s theorem 21
4.2 Minimal spanning trees 24
4.3 Kruskal and other algorithms 25
4.4 ***Some questions : Random spanning trees*** 27

5 Extremal Graph Theory 29
5.1 Existence of complete subgraphs and Hamiltonian circuits 29
5.2 ***Probabilistic Method : An introduction*** 32
5.3 ***Some other methods*** 33

6 Matchings, covers and factors. 35
6.1 Hall’s marriage theorem, Koenig’s, Gallai’s and Berge’s theorems 35
6.2 Graph factors and Tutte’s theorem 38
6.3 ***Gale-Shapley Stable marriage/matching algorithm*** 42
6.4 ***Shannon rate of communication*** 42
6.5 ***Erdös-Gallai Theorem*** 43
6.6 ***Equivalent theorems to Hall’s matching theorem and more applications*** 44

7 Flows on networks, vertex and edge connectivity 45
7.1 Max-flow min-cut theorem 45
7.2 Vertex and edge connectivity 49
7.2.1 Menger’s theorem 50
7.3 ***Some applications*** 52
7.4 Exercises 52

8 Chromatic number and polynomials 55
8.1 Graph coloring 55
8.2 Chromatic Polynomials 55
8.3 Exercises 58
8.4 ***Read’s conjecture and Matroids*** 58

9 Graphs and matrices 59
9.1 Incidence matrix and connectivity 59
9.1.1 Laplacian Matrix 62
9.1.2 Spanning trees and Laplacian 63
9.2 More properties of Adjacency Matrix 64
9.3 Exercises 66
9.4 Further reading 67

10 Planar graphs 69
10.1 Euler’s formula and Kuratowski’s theorem 70
10.2 Coloring of Planar graphs 71
10.3 ***Graph dual*** 72
10.4 Exercises 73
10.5 Further Reading 74
Index

0-1 incidence matrix, 62
adjacency matrix, 17
augmenting path, 38
automorphism, 14
Berge theorem, 38
Cauchy-Binet formula, 63
Cayley graphs, 13
Cayley’s formula, 21
chromatic number, 55
chromatic polynomials, 55
coloring, 55
Complete graph, 13
connected components, 16
cover, 36
cut-edge, 23
cut-vertex, 23
cycle, 16
Degree, 15
deletion-contraction principle, 55
diameter, 17
Directed graphs, 3
edge connectivity, 49
elementary subgraph, 64
embedding, 69
equivalences to Hall’s theorem, 44
Erdös-Gallai Theorem, 43
Euclidean Lattices, 13
Euler’s formula, 70
Euler’s theorem, 18
Eulerian graph, 18
f-factor, 40
faces, 69
factor, 38
flow, 45
Ford-Fulkerson Algorithm, 46
Forests, 21
Gale-Shapley algorithm, 42
Gallai theorem, 37
girth, 29
Graph, 3
graph dual, 71
Graph invariant, 15
graph metric, 17
graph minor, 70
Graph property, 15
Growth of groups, 19
Hall’s marriage theorem, 35
history, 5
homomorphism, 14
incidence matrix, 59
independent sets, 36
induced subgraph, 14
Intersection graph, 13
isolated, 15
isomorphism, 14
Kirchoff’s matrix tree theorem, 64
Kirchoff’s node law, 45
Konig’s theorem, 37


Kuratowski’s theorem, 70

Laplacian matrix, 62

Mantel’s theorem, 29
matching, 35
matroid, 58
max-flow min-cut theorem, 45
Menger’s theorem, 50
Minimal Spanning Tree, 24
Minimal spanning tree algorithms, 25
Multi-graphs, 3

Neighbourhood, 15

Path, 16
perfect matching, 35
planar graph, 69
probabilistic method, 32
Prufer code, 21

Ramsey numbers, 32
random spanning trees, 27
Read’s conjecture, 58
Real-life graphs, 4
Regular graph, 15
Rota-Welsh conjecture, 58

Shannon rate, 42
Spanning subgraph, 15
Stable matching, 42

tree counting theorem, 22


Tree counting via Laplacians, 64
Trees, 21
triangulation, 70
Turan’s theorem, 30
Tutte’s factor theorem, 39

vertex connectivity, 49

walk, 16
Chapter 1

Disclaimers and Warnings

• These notes are highly incomplete, not well-written, not well ’latexed’ and not meant to be taken
seriously like a book.

• The notes might fail on many aesthetic counts, and the author has put little effort into making the notes more pleasing to the eye.

• These are written for the introductory course on graph theory offered to second year undergraduate students. http://www.isibang.ac.in/~adean/infsys/database/Bmath/GT.html

• These are a compendium of some of the class material (mostly definitions and result statements) meant to serve as a pointer for students. I hope to add proof details over the years.

• These are mostly a subset of the material covered in class.

• All figures are taken from various sources; apologies for not acknowledging them all.

• I am thankful to the students of the course in the years 2018 and 2019 for pointing out errors and inaccuracies, and also for listening to my lectures :-)

• Unless mentioned otherwise, the proofs and results are all borrowed from some book or the other. A few of them are listed in the bibliography.

• Do not look for any new results or proofs or anything else fancy here. This is just material arranged according to my own teaching convenience.

• I have tried to mention some related research-level questions and open problems at the end of
each chapter.

• The main references are [Bollobas 2013, Van Lint and Wilson, West 2001, Diestel 2000].

Chapter 2

Introduction to Graphs

2.1 Definition and some motivating examples


Some notation : [n] = {1, . . . , n}, V × V is the usual set product, and \binom{V}{2} denotes the set of unordered pairs of distinct elements in V .

Definition 2.1.1. (Graph). A (simple) graph G consists of a finite or countable vertex set V := V (G) and an edge set E := E(G) ⊂ \binom{V}{2}.


We shall consider only locally-finite graphs i.e., graphs such that vertices occur only
in finitely many edge-pairs.

For a vertex set V , we represent edges as (v, w), v, w ∈ V. We also write v ∼ w to denote that (v, w) ∈ E. A very common pictorial representation of graphs is as follows : vertices are represented as points in the plane and edges are lines / curves between the two vertices. See Figure 2.1. As an exercise, explicitly define the graphs based on these representations.
See these two talks by Hugo Touchette
(http://www.physics.sun.ac.za/~htouchette/archive/talks/networks-short1.pdf ) and
(http://www.physics.sun.ac.za/~htouchette/archive/talks/complexnetworks1.pdf ) for many
examples and applications of graphs in real-life problems. We shall briefly touch upon these in our
examples as well.
The following two variants shall be hinted upon in our motivation and some examples but we shall
not discuss the same for most of the course.

Remark 2.1.2 (Two variants).

Directed graphs: These are graphs with directed edges or equivalently the edge-pairs are ordered

Multi-graphs: These are graphs with multiple edges between vertices including self-loops.
1
None of the figures in these notes are mine; they are taken from the internet.


Figure 2.1: Some representations of graphs on vertex sets of cardinalities 3 and 4.

2.1.1 Popular examples of graphs

Let us see some popular examples of graphs.

Example 2.1.3. (Facebook graph) V is the set of Facebook users and an edge is placed between two vertices if they are friends of each other. See Figure 2.2.

Example 2.1.4. (Road networks) V is the set of cities and an edge represents roads/trains/air-routes between the cities. See Figure 2.3.

Example 2.1.5. (Collaboration graph) V is the set of all mathematicians who have published (say, listed on MathSciNet) and an edge represents that two mathematicians have collaborated on a paper together.

Example 2.1.6. (Complexity of Shakespeare’s plays) V represents the characters in a Shakespeare play and an edge between two characters means that both appeared in a scene together. See Figures 2.5 and 2.6 for the graphs of Othello and Macbeth. See http://www.martingrandjean.ch/network-visualization-shakespeare/ for more details. Network density is the ratio |E|/|V | where |.| denotes the cardinality of a set.

More examples of networks abound in biology


(https://en.wikipedia.org/wiki/Biological_network), chemical reaction networks
(https://en.wikipedia.org/wiki/Chemical_reaction_network_theory) et al. We shall later give
a more historical perspective with some more examples.

Definition 2.1.7. (Weighted graphs) A graph G with a weight function w : E → R.



Figure 2.2: A snapshot of facebook graph

Often the weights are non-negative.

Example 2.1.8. (Traffic Networks) G is the road network with the weight w denoting the average traffic in a day. See Figure 2.7.

Example 2.1.9. (Football graph) This is an example of a weighted directed graph. Let the vertex set be the 11 players in a team, and let a directed edge from i to j represent that player i has passed to player j. Associated with such a directed edge (i, j) is a weight w(i, j) that denotes the number of passes from player i to player j. See the football graph from the Spain vs Holland final in the 2010 WC in Figure 2.8. This illustrates well the appealing visualization offered by graphs. Spain’s passing game is very evident in the graph, and this gives a good way to analyse such effects in sports and other domains.

2.2 Some history and more motivation


Example 2.2.1. (Konigsberg Problem. Euler, 1736) The problem was to find a path starting at any point that traverses all the bridges exactly once and returns to the starting point (see Figure 2.9). After many attempts in vain, Euler showed that this is not possible. This problem is considered the birth of both graph theory and topology. We shall see Euler’s solution later.

Example 2.2.2. (Electrical Networks. Kirchoff, 1847) Electrical networks can be represented as
weighted directed graphs with current and resistance viewed as weights. This formalism can explain

Figure 2.3: Indian railway network



Figure 2.4: Collaboration graph of mathematicians based on mathscinet

Figure 2.5: Othello characters graph



Figure 2.6: Macbeth characters graph

Figure 2.7: A road network with traffic density



Figure 2.8: Football graph from the Spain vs Holland 2010 WC Final.

Figure 2.9: Seven bridges of Konigsberg on the river Pregel.



Kirchoff’s and Ohm’s laws. This connection between graphs and electrical networks is highly useful not only for graph theory and electrical networks, but is also used in the study of random walks and in algebraic graph theory. It is not entirely inaccurate to talk of Kirchoff’s formalism as discrete cohomology.

Example 2.2.3. (Chemical Isomers. Cayley, 1857) Atoms were represented by vertices and bonds between atoms by an edge. Such a representation was used to understand the structure of molecules. We shall see Cayley’s tree enumeration formula, which was used to enumerate the chemical isomers of a compound. See Figure 2.10.

Figure 2.10: Hydrocarbons represented as graphs.

Example 2.2.4. (Tour of cities. Hamilton, 1859) Given a set of cities and roads between them, find a path that starts at one city, visits all the cities exactly once and returns to the starting city. Can we guarantee that such a path exists for all road networks ? See Figure 2.11.

Figure 2.11: Hamilton cycle of a graph



Example 2.2.5. (Four colour theorem) Suppose we take a map of the world and assume that countries are contiguous land masses. Let countries be vertices and let edges be drawn between two countries that share a boundary. Can we colour the countries such that neighbouring countries have different colours ? What is the minimum number of colours required for the same ?
A more general question : can any graph be drawn as a map, i.e., can any graph be drawn on the plane such that edges do not cross each other ?

2.3 Course overview :

Already, we have seen some historic questions that shall be discussed in the course. Now, we shall see
something more specific.

2.3.1 Flows, Matchings and Games on Graphs

Example 2.3.1 (Maximum traffic flow). Consider the traffic network (weighted graphs) and take a
starting point (source) and ending point (sink). What is the maximum amount of traffic that can flow
from source to the sink in one instance ?

The solution is very famously known as the max flow-min cut theorem and has many applications.

Example 2.3.2 (Hostel room allocation). There are n rooms and m students. Each student gives a list of rooms acceptable to them. Can the warden allot to each student a distinct room from their list ?

Hall’s marriage theorem shall give a deceptively simple necessary and sufficient condition for the same. Hall’s marriage theorem shall be proved using the max flow-min cut theorem.

Figure 2.12: An example of a hide and seek scenario.



Example 2.3.3 (Hide and Seek game). Consider an area with horizontal main roads and vertical cross roads (see the grid in Figure 2.12). There are safe houses at certain intersections, marked by crosses in the figure. A robber chooses to hide at one of the safe houses. A cop wants to find the main road or the cross road on which the robber stays. What is the best strategy for the cop to succeed ? What is the best strategy for the robber to defeat the cop ? We shall exhibit a strategy for both using Hall’s marriage theorem.

2.3.2 Graphs and matrices

A graph G on V can be represented as a V × V matrix with 1 in the entry (v, w) iff (v, w) is an edge in G. One can represent weighted and directed graphs as well. What do rank and nullity mean here ? Are there other matrices that encode the properties of graphs ? This viewpoint shall connect Kirchoff’s formalism with cohomology theory.

2.3.3 Random graphs and probabilistic method.

If time permits, we shall mention random graphs. We fix [n] as the vertex set and choose edges at random, i.e., for each potential edge, toss a coin and place the edge if it lands heads. What are the properties of such graphs ?
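A minimal sketch (in Python, not part of the original notes) of this coin-tossing construction; the function name and the default p = 1/2 are illustrative assumptions.

import random

def random_graph(n, p=0.5, seed=None):
    """Sample a random graph on vertex set [n] = {1,...,n}: each of the
    C(n,2) potential edges is included independently with probability p
    (a fair coin toss when p = 1/2)."""
    rng = random.Random(seed)
    vertices = list(range(1, n + 1))
    edges = [(i, j) for i in vertices for j in vertices
             if i < j and rng.random() < p]
    return vertices, edges

# Example: a random graph on 10 vertices; the edge count is roughly p * C(10, 2).
V, E = random_graph(10, p=0.5, seed=0)
print(len(E), "edges out of", 10 * 9 // 2, "possible")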
Probabilistic method : Suppose we want to show that there exists a graph with a certain property. We show that the random graph satisfies the property with positive probability. Thus, the set of graphs satisfying the property is non-empty.
still remarkably successful in proving various results.
I shall try to emphasize various mathematical topics (such as cohomology) that show up in their simplest incarnation in graph theory, and also the many mathematical tools, such as linear algebra, analysis and probability, that are used to study graphs. A very readable survey on the growing importance of graph theory is [Lovasz 2011].
Chapter 3

The very basics

3.1 Some useful classes of graphs :


Example 3.1.1 (Complete graph). Kn : V = [n], E = \binom{[n]}{2}.

Example 3.1.2 (Intersection graph). Let S1 , . . . , Sn be subsets of a set S. Define G with V = [n] and i ∼ j if Si ∩ Sj ≠ ∅.
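A short sketch (Python, hypothetical helper name) that builds the intersection graph of a family of sets as in Example 3.1.2; it can also be used to experiment with Exercise 3.1.6 below.

def intersection_graph(sets):
    """Given sets S_1, ..., S_n (a list of Python sets), return the graph on
    V = [n] with i ~ j iff S_i and S_j intersect (i != j)."""
    n = len(sets)
    V = list(range(1, n + 1))
    E = [(i, j) for i in V for j in V
         if i < j and sets[i - 1] & sets[j - 1]]
    return V, E

# Example: S_1 = {a, b}, S_2 = {b, c}, S_3 = {d} gives the single edge (1, 2).
print(intersection_graph([{"a", "b"}, {"b", "c"}, {"d"}]))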

Example 3.1.3 (Delaunay graph). Let P ⊂ R^d , d ≥ 1, be a finite set of distinct points. For y ∈ R^d , d(y, P ) := min_{x∈P} |x − y|. Define Cx := {y : d(y, P ) = |x − y|}, x ∈ P . The Delaunay graph is the intersection graph on P with intersecting sets Cx , x ∈ P. Cx is called the Voronoi cell of x.

Example 3.1.4 (Euclidean Lattices). Let Br (x) be the closed ball of radius r centered at x. The d-dimensional integer lattice is the intersection graph formed with Z^d as vertex set and B_{1/2}(z), z ∈ Z^d, as the intersecting sets. Alternatively, z ∼ z′ if \sum_{i=1}^{d} |z_i − z′_i| = 1.

Example 3.1.5 (Cayley graphs). Let (H, +) be a group with a finite set of generators S such that S = −S (symmetric). The Cayley graph G is defined with vertex set V = H and x ∼ y if x − y ∈ S. Since S is symmetric, y − x ∈ S iff x − y ∈ S, and so the relation ∼ is well-defined whether or not the group is abelian.

Exercise 3.1.6. Show that every graph is an intersection graph.

Exercise 3.1.7. Show that Euclidean lattices are Cayley graphs. Find the generators S.

3.1.1 Some graph constructions

Example 3.1.8 (Bi-partite graph). Graphs (V, E) such that V = V1 ⊔ V2 and E ⊂ V1 × V2 .

Example 3.1.9 (Complementary graph). Let G = (V, E) be a graph. The complementary graph is G^c = (V, \binom{V}{2} − E).


Example 3.1.10 (Line graph). Let G = (V, E) be a graph. The line graph is L(G) with vertex set E and e1 ∼ e2 if they are adjacent in G, i.e., they share an end-vertex.


Example 3.1.11 (Petersen graph). The vertex set of the graph is \binom{[5]}{2} and {i, j} ∼ {k, l} if {i, j} ∩ {k, l} = ∅.

Exercise 3.1.12. Show that the Petersen graph is the complement of the line graph of K5 .

Exercise 3.1.13. Are the following three graphs isomorphic to Petersen graph ?

Exercise 3.1.14. Find the number of edges in a Line graph L(G) in terms of the number of edges in
G?

3.2 Some basic notions


We shall assume that all our graphs are finite unless mentioned otherwise. In some
examples, we shall illustrate things using infinite graphs but all our results are for finite
graphs only.
Fix a graph G with vertex set V and edge set E. We say v, w are neighbours if v ∼ w.

Definition 3.2.1 (Graph homomorphism and isomorphism ). Suppose G, H are two graphs. A func-
tion φ : V (G) → V (H) is said to be a graph homomorphism if x ∼ y implies φ(x) ∼ φ(y). φ is said to
be an isomorphism if φ is a bijection and x ∼ y iff φ(x) ∼ φ(y) i.e., φ, φ−1 are graph homomorphisms.
G and H are isomorphic (G ∼ = H) if there exists an isomorphism between G and H. An automorphism
is an isomorphism φ : G → G.

Exercise 3.2.2. The set of all automorphisms of G is called Aut(G). Define a binary operation on Aut(G) as follows : for g, f ∈ Aut(G), g.f = g ◦ f , i.e., the composition operation. Is Aut(G) a group ?

Essentially, G and H are the same graph up to re-labelling. H is a subgraph of G if V (H) ⊂ V (G) and E(H) ⊂ E(G). H is an induced subgraph of G if H is a subgraph of G and whenever v, w ∈ V (H) and (v, w) ∈ E(G), then (v, w) ∈ E(H).

Exercise* 3.2.3. H is an induced subgraph of G iff H is the maximal subgraph in G with vertex set
V (H).

Example 3.2.4 (Trivial homomorphisms). The identity automorphism is a trivial homomorphism ;


If H ⊂ G, then the inclusion map from V (H) to V (G) gives rise to a homomorphism.

Exercise 3.2.5. Show that there exists a homomorphism from G to K2 iff G is bi-partite.

Exercise* 3.2.6. Can one characterize classes of graphs G which have homomorphisms to Kk ?

Exercise* 3.2.7. Denote by Hom∗ (H, G) the set of injective homomorphisms from H to G. Let |V (H)| = k. Show that

|Hom∗ (H, G)| = \sum^{≠}_{(v_1 ,\dots,v_k ) ∈ V (G)^k} 1[H ⊂ ⟨{v_1 , . . . , v_k }⟩],

where \sum^{≠} denotes that the sum is over distinct elements and ⟨v_1 , . . . , v_k ⟩ is the induced subgraph on the vertices v_1 , . . . , v_k .

See http://www.cs.elte.hu/~lovasz/problems.pdf for open problems about graph homomorphisms.

Definition 3.2.8 (Some notations). Let G be the set of all finite graphs.

• Graph property : A set P ⊂ G is said to be a graph property if G1 ∈ P and G1 ≅ G2 , then G2 ∈ P.

• Graph invariant : φ : G → R is a graph invariant if φ(G1 ) = φ(G2 ) whenever G1 ≅ G2 . Equivalently, φ is a graph invariant if φ^{−1}(r) is a graph property for every r ∈ R.

• Spanning subgraph : H is a spanning subgraph if V (H) = V (G).

• Neighbourhood : If v ∈ V , the neighbourhood of v, Nv := {w : w ∼ v}.

• Degree : dv := |Nv |.

• v is isolated if dv = 0.

• Regular graph : G is d-regular if dv = dw = d for all v, w.

• Minimum degree : δ(G) := min{dv : v ∈ V }.

• Maximum degree : ∆(G) := max{dv : v ∈ V }.

• Average degree : d(G) := |V |^{−1} \sum_v d_v .

• Edge density : ε(G) := |E|/|V | = d(G)/2. Show the second equality.

Prove that δ(G) ≤ d(G) ≤ ∆(G).
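A small sketch (Python; the function name and graph representation are my own choices, not from the notes) computing the quantities just defined for a graph given by vertex and edge lists.

def degree_stats(V, E):
    """Return (min degree, max degree, average degree, edge density) of a
    simple graph with vertex list V and edge list E of unordered pairs."""
    deg = {v: 0 for v in V}
    for u, w in E:
        deg[u] += 1
        deg[w] += 1
    degrees = list(deg.values())
    avg = sum(degrees) / len(V)       # d(G) = |V|^{-1} * sum of degrees
    density = len(E) / len(V)         # eps(G) = |E|/|V| = d(G)/2
    return min(degrees), max(degrees), avg, density

# Example: a path on 4 vertices has delta = 1, Delta = 2, d(G) = 1.5, eps = 0.75,
# consistent with delta(G) <= d(G) <= Delta(G) and d(G) = 2 eps(G).
print(degree_stats([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]))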

Exercise* 3.2.9. Which of the graphs in Section 3.1 are regular and what are their average degrees
? Can you compute Aut(G) for these examples ?

Exercise* 3.2.10. What can you say about the Minimum degree, maximum degree and average degree
of Complementary and Line graphs given those of the original graph ?

Lemma 3.2.11. \sum_v d_v = 2|E|, and in particular is even. Thus the number of odd-degree vertices is even and d(G) = 2ε(G).

Exercise 3.2.12. Suppose G is a 3-regular graph on 10 vertices such that any two non-adjacent
vertices have exactly one common neighbour. Is G isomorphic to Petersen graph ?

Definition 3.2.13. (Path, Walk and Cycle) An ordered set of vertices P = v0 . . . vk is said to be a walk from v0 to vk if vi ∼ vi+1 for all i. A walk P = v0 . . . vk is said to be a path from v0 to vk if the edges (vi , vi+1 ) are distinct for all i. A path is simple (also called a self-avoiding walk) if vi ≠ vj for all i ≠ j. A walk or path is said to be closed if vk = v0 and open otherwise. A closed path is also called a circuit. A circuit C = v0 . . . vk−1 v0 with no repetition of intermediate vertices is called a cycle, i.e., v0 , . . . , vk−1 are distinct.

For v ≠ w, we say that v is connected to w (denoted by v → w) if there exists a path from v to w. We shall always assume that v → v. Show that → induces an equivalence relation. Define the component of v as Cv := {w : v → w}. If v → w, then Cv = Cw .

Exercise 3.2.14. For v ≠ w, show that there exists a walk from v to w iff there exists a path from v to w iff there exists a self-avoiding walk from v to w.

Exercise* 3.2.15. Show that → partitions V into equivalence classes and the equivalence class of v
is Cv . Show that Cv is the maximal connected subgraph containing v.

The equivalence classes induced by → are called as connected components and the number of
connected components are denoted by β0 (G).

Exercise 3.2.16. Show that β0 (G) = 1 iff for all v ≠ w ∈ G, v → w.

We call the graph to be connected if β0 (G) = 1.

Exercise 3.2.17. Show that δ(G), ∆(G), d(G), β0 (G) are all graph invariants.

Exercise 3.2.18. Let G be a simple graph with at least two vertices. Show that G must contain at
least two vertices with the same degree.

3.3 Graphs as metric spaces


Let G be a connected weighted graph with strictly positive edge-weights, i.e., w(e) > 0 for all e ∈ E. For un-weighted graphs, set w ≡ 1. We now shall view graphs as metric spaces. The weight/length of a path or walk P = v0 . . . vk is w(P ) := \sum_{i=0}^{k−1} w((v_i , v_{i+1})). Define the distance between two vertices v ≠ w as follows :

dG (v, w) := inf{w(P ) : P is a path from v to w}.

Set dG (v, v) = 0 for all v ∈ V . Show that dG (v, w) = dG (w, v) and further that dG (v, w) = 0 implies v = w.
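For un-weighted graphs (w ≡ 1), the distance dG can be computed by breadth-first search; here is a short sketch in Python (the adjacency-dictionary representation and function name are my own choices, not from the notes). It returns dG(v, w) for all w in the component of v, and so also recovers Cv and the balls Br(v) defined below.

from collections import deque

def bfs_distances(adj, source):
    """adj: dict mapping each vertex to a list of neighbours (un-weighted graph).
    Returns {w: d_G(source, w)} for every w in the component of source;
    vertices in other components are simply absent (distance infinity)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:          # first visit gives the shortest distance
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

# Example: a 4-cycle 1-2-3-4-1; the vertex opposite to 1 is at distance 2.
adj = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}
print(bfs_distances(adj, 1))   # {1: 0, 2: 1, 4: 1, 3: 2}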

Exercise* 3.3.1 (Graph metric ). Show that (V, dG ) is a metric space.

Even if G is not connected, we have that dG satisfies the three axioms of a metric space. Further,
define the diameter of a graph as

diam(G) = max{dG (v, w) : v, w ∈ G}.

Given a vertex v and r > 0, set Br (v) := {w : dG (v, w) ≤ r}, the ball of radius r at v. For un-weighted graphs show that |Bn (v)| ≤ \sum_{i=0}^{n} ∆(G)^i .

Exercise* 3.3.2. Let G be the Cayley graph of a free group generated by a finite symmetric set of
generators S = {s1 , −s1 , . . . , sn , −sn }. Compute |Bn (e)| for all n.

Exercise 3.3.3. Can you compute |Bn (O)| for Ld for d ≥ 2 ?

For (un-weighted) graphs, let Πn (v) be the set of self-avoiding walks of length exactly n from v. It holds that |Πn (v)| ≤ ∆(G)(∆(G) − 1)^{n−1} for n ∈ N. Thus for L^d , we have that |Πn (O)| ≤ 2d(2d − 1)^{n−1} .

Exercise* 3.3.4. Is it true that there exists a κd such that as n → ∞, we have that |Πn (O)|^{1/n} → κd ∈ [0, ∞) ? Hint : Use that log |Πn (O)| is sub-additive.

3.4 Graphs and matrices : A little peek


We shall now give a brief hint about utility of matrices and linear algebra in answering graph-theoretic
questions.

Definition 3.4.1 (Adjacency matrix. ). Let G be a graph on n vertices. For simplicity, assume
V = [n]. The adjacency matrix A := A(G) := (A(i, j))1≤i,j≤n of a simple finite graph is defined as
follows : A(i, j) = 1[i ∼ j]. The definition can be appropriately extended for multi-graphs.

A is a symmetric matrix and hence has real eigenvalues.

Lemma 3.4.2. Let G be a graph on n vertices and A be its adjacency matrix. Show that A^l(i, j) is the number of walks of length l from i to j.

Proof. By definition, A^l(i, j) = \sum_{i_1 ,\dots,i_{l−1}} A(i, i_1 )A(i_1 , i_2 ) · · · A(i_{l−1} , j), and since A is a 0–1 valued matrix, we get that A(i, i_1 )A(i_1 , i_2 ) · · · A(i_{l−1} , j) ∈ {0, 1}. The proof is complete by noting that A(i, i_1 )A(i_1 , i_2 ) · · · A(i_{l−1} , j) = 1 iff i ∼ i_1 ∼ i_2 ∼ · · · ∼ i_{l−1} ∼ j, i.e., i i_1 . . . i_{l−1} j is a walk of length l.
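A quick numerical illustration of Lemma 3.4.2 (a sketch using numpy; not part of the original notes): the entries of A^l count walks of length l.

import numpy as np

# Adjacency matrix of the 4-cycle 1-2-3-4-1 (vertices indexed 0..3 here).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

A2 = np.linalg.matrix_power(A, 2)
A3 = np.linalg.matrix_power(A, 3)
print(A2[0, 0])   # 2: the two walks 0-1-0 and 0-3-0 of length 2
print(A3[0, 1])   # 4: walks 0-1-0-1, 0-1-2-1, 0-3-0-1, 0-3-2-1 of length 3
# The trace of A^l counts closed walks of length l; it equals the sum of the
# l-th powers of the eigenvalues (compare the exercise below).
print(np.trace(A3), np.sum(np.linalg.eigvalsh(A) ** 3).round())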

Exercise* 3.4.3. Let λ1 , . . . , λn be the eigenvalues of A. Show that the number of closed walks of length l in G is \sum_{i=1}^{n} λ_i^l . Here, we count the walks v1 , . . . , vl−1 , v1 and v2 , . . . , vl−1 , v1 , v2 as distinct walks.

Exercise* 3.4.4. Count the number of closed walks of length l in the complete graph Kn .

Lemma 3.4.5. Let G be a connected graph on vertex set [n]. If d(i, j) = m, then I, A, . . . , A^m are linearly independent.

Proof. Assume i ≠ j. Since there is no walk from i to j of length less than m, A^k(i, j) = 0 for all k < m and A^m(i, j) > 0. Thus, if c_0 I + c_1 A + · · · + c_m A^m = 0 for some coefficients c_0 , . . . , c_m , then by the above observation the (i, j) entry of this relation gives c_m = 0, i.e., the relation involves only I, A, . . . , A^{m−1} . Since d(i, j) = m, there exists j′ such that d(i, j′ ) = m − 1. Now applying the above argument recursively, we get that c_0 = c_1 = . . . = c_m = 0.

Corollary 3.4.6. Let G be a connected graph with k distinct eigenvalues. Then k > diam(G).

Proof. Let d = diam(G). Recall that the minimal polynomial of a matrix A is the monic polynomial
Q of least degree such that Q(A) = 0. By Lemma 3.4.5, we have that deg(Q) > d. The proof is
complete by observing that the number of distinct eigenvalues of A is at least deg(Q).

3.5 Euler’s theorem and König’s theorem on bi-partite graphs


Definition 3.5.1 (Eulerian graph). A circuit in a graph that visits every edge (exactly once) and every vertex is called an Eulerian circuit, and a graph that has an Eulerian circuit is called an Eulerian graph.

Exercise* 3.5.2. Show that a finite graph G is Eulerian iff G is connected and is an edge-disjoint union of cycles, i.e., G = C1 ∪ . . . ∪ Cm where the Ci ’s are cycles and have no common edges.

Theorem 3.5.3 (Euler’s theorem ; Veblen 1912.). A finite connected graph is Eulerian iff every vertex
has even degree.

Proof. Since the graph is Eulerian, by Exercise 3.5.2 let C1 , . . . , Cm be a partition of the edges into edge-disjoint cycles. Viewing the Ci ’s as graphs by themselves, observe that dv (Ci ) = 2 · 1[v ∈ Ci ], i.e., vertices in Ci have degree 2 and all other vertices have degree 0. Further, we have by the edge-disjointness of the Ci ’s that

dv (G) = \sum_i dv (Ci ) = 2 \sum_i 1[v ∈ Ci ],

thus proving that every vertex has even degree.


To show the converse, assume that the vertex degrees are all even. Let x0 x1 . . . xl be a simple
path of maximal length l i.e., there is no simple path of length > l. Since dx0 ≥ 2, there exists a
y ∈ N (x0 ) \ {x1 }. Then yx0 x1 . . . xl is a simple path of length l + 1 and this yields a contradiction
unless y = xi for some 1 ≤ i ≤ l. Let xi = y. Then x0 x1 . . . xi x0 is a cycle, say C1 . Remove C1 from
G and consider G − C1 . All vertex degrees in G − C1 are even. So, we can repeat the procedure for a
component of G − C1 with at least two vertices and obtain a cycle C2 . Continuing this way, we obtain
cycles C3 , . . . , Cm such that G is an edge-disjoint union of C1 , . . . , Cm and hence G is Eulerian by the claim above.

Exercise* 3.5.4 (Konigsberg problem). Prove Euler’s theorem for multi-graphs and hence show that
Konigsberg problem has no solution.

Exercise* 3.5.5. Every closed odd length walk contains an odd length cycle.

Theorem 3.5.6 (König’s theorem). A graph is bi-partite iff it has no odd cycles.

The König here is the Hungarian mathematician Dénes König, whose father Gyula König was also a well-known mathematician. Dénes wrote the first book on graph theory.

Proof. The only if part : Suppose G is bi-partite with partition V = V1 ⊔ V2 . Let v0 v1 . . . vk v0 be a cycle. WLOG1 , suppose v0 ∈ V1 . By bi-partiteness, vi ∈ V2 for 1 ≤ i ≤ k and i odd, and vi ∈ V1 for 1 ≤ i ≤ k and i even. Since vk ∼ v0 , vk ∈ V2 and hence k is odd. Thus, the length of the cycle, k + 1, is even.
The if part : Let v0 ∈ V . Set v ∈ V1 if dG (v0 , v) is even and v ∈ V2 otherwise. Suppose, for contradiction, that v ∼ w for some v, w ∈ V1 (the case v, w ∈ V2 is identical). Let P, P′ be shortest paths from v0 to v and from v0 to w respectively, and let P∗ be the reversed path from v to v0 . Since v, w ∈ V1 , both P∗ and P′ have even lengths, and so the concatenation of P′ , the edge (w, v) and P∗ is a closed walk of odd length. By Exercise 3.5.5, it contains an odd length cycle, a contradiction. Thus there are no edges within V1 or within V2 , i.e., G is bi-partite with partition V1 ⊔ V2 .

3.6 ***Some questions***


Question 3.6.1. Instead of counting |Bn (0)|, consider the following counting. Let fd (r) = |{(n1 , . . . , nd ) ∈ Z^d : \sum_{i=1}^{d} n_i^2 ≤ r^2 }| be the number of lattice points within the ball of radius r in R^d . Show that fd (r)/r^d converges to a constant. Determining the exact asymptotics of f2 (r) is known as the Gauss circle problem.

Here are two questions from geometric group theory. For more on this fascinating subject, refer
to [Clay and Margalit 2017].

Question 3.6.2. Suppose G is the Cayley graph of a finitely generated countable group such that n^{−d} |Bn (e)| → ∞ for all d ∈ (0, ∞). Is it true that n^{−a} log |Bn (e)| → ∞ for some a > 0 ?

Question 3.6.3. Can you construct a finitely generated group such that its Cayley graph has the following growth property for some a < 0.7 ?

0 < lim inf_{n→∞} n^{−a} log |Bn (e)| ≤ lim sup_{n→∞} n^{−a} log |Bn (e)| < ∞.

Now some questions on self-avoiding walks.

Question 3.6.4. Can you calculate κd for any d ≥ 2 ?


1
Without loss of generality

Note : If you have solutions for any of the above four questions, please consider submitting them here. It was recently shown by Duminil-Copin and Smirnov that κ = \sqrt{2 + \sqrt{2}} for the hexagonal lattice (see Figure 3.1). In listing problems in the IMO which can lead to research problems, Stanislav Smirnov mentions this ([Smirnov 2011]).

Figure 3.1: Hexagonal lattice


Chapter 4

Spanning Trees

Spanning trees (and minimal spanning trees) are a central object in combinatorial optimization, graph
theory and probability. See the end of the chapter for some probabilistic connections.

4.1 Trees and Cayley’s theorem


Definition 4.1.1 (Trees ). A graph with no cycles is called a forest. A connected forest is called a
tree.

Exercise* 4.1.2. Show that TFAE1 for a graph G on n vertices.

1. G is a forest.

2. G has n − β0 (G) edges.

Exercise 4.1.3. Prove that there exist at least two vertices of degree 1 in any tree with at least two vertices.

Theorem 4.1.4 (Cayley’s formula). The number of labelled trees on n vertices (i.e., spanning trees of Kn ) is n^{n−2}.

Proof via Prufer code (H. Prufer, 1918). We shall construct a bijection

P : S(Kn ) := {spanning trees of Kn } → [n]^{n−2} .

Given a tree T on [n], we generate a sequence of trees T1 , T2 , . . . inductively as follows : Set T1 = T . Given the tree Ti on n − i + 1 vertices, let xi be the least labelled vertex of degree 1 and delete the edge incident on xi to obtain Ti+1 . Denote by yi the neighbour of xi . Observe that Tn−1 is K2 and the process terminates at Tn when the tree has only one vertex. Now define P as

P (T ) = (y1 , . . . , yn−2 ).
1
the following are equivalent


Clearly P is a map from S(Kn ) to [n]^{n−2} . We shall prove that it is a bijection by constructing P^{−1} using induction. For n = 2, P is trivially a bijection.
Some observations : (i) (y2 , . . . , yn−2 ) is the Prufer code of T2 . (ii) The degree di = \sum_{j=1}^{n−2} 1[yj = i] + 1 (try to prove this yourself by induction on n and then see the proof below). (iii) As a consequence we have that xk = min{i : i ∉ {x1 , . . . , xk−1 , yk , . . . , yn−2 }} (again prove by induction on k using (ii)).
Suppose P is a bijection for n − 1 and let a = (a1 , . . . , an−2 ) ∈ [n]^{n−2} . Now, let x = min{i : i ∉ {a1 , . . . , an−2 }}. Consider a′ = (a2 , . . . , an−2 ); by the induction assumption, a′ is the unique Prufer code of a tree T′ on [n] \ {x}, since P is a bijection on [n] \ {x}. Define T := T′ ∪ {x, a1 }. Clearly, T is a tree as x ∉ V (T′ ), and P (T ) = a. If T′′ is a tree such that P (T′′ ) = a, then by property (ii) of the Prufer code, [n] \ {a1 , . . . , an−2 } is precisely the set of vertices of degree 1 in T′′ . By the definition of x, it has the least label among such vertices and thus by construction (x1 , y1 ) = (x, a1 ). Hence, P (T2′′ ) = a′ . By the uniqueness of T′ , we have that T2′′ = T′ and hence T′′ = T. So P^{−1} is well-defined on [n]^{n−2} and P is 1 − 1 and onto. Thus P is a bijection.

Trivially (ii) holds for n = 2. Assume that it holds for all trees on n − 1 vertices. Let T be a tree on [n]. By (i) and the induction hypothesis, for i ≠ x1 , we have that di (T2 ) = \sum_{j=2}^{n−2} 1[yj = i] + 1. Since di (T2 ) = di (T ) for all i ≠ x1 , y1 and dx1 (T ) = 1, dy1 (T ) = dy1 (T2 ) + 1, (ii) holds for T as well.
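The bijection above is easy to implement; here is a sketch in Python (the function names are mine, not from the notes) of the map P and its inverse, following the construction in the proof.

def prufer_code(edges, n):
    """Prufer code of a labelled tree on [n] given as a list of edges."""
    adj = {v: set() for v in range(1, n + 1)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    code = []
    for _ in range(n - 2):
        x = min(v for v in adj if len(adj[v]) == 1)   # least labelled leaf x_i
        y = adj[x].pop()                              # its neighbour y_i
        adj[y].discard(x)
        del adj[x]
        code.append(y)
    return code

def tree_from_code(code):
    """Inverse map: recover the tree on [n] from a code in [n]^{n-2}."""
    n = len(code) + 2
    degree = {v: 1 for v in range(1, n + 1)}
    for y in code:
        degree[y] += 1                                # property (ii) of the code
    edges = []
    for y in code:
        x = min(v for v in degree if degree[v] == 1)  # least labelled leaf left
        edges.append((x, y))
        degree[x] -= 1
        degree[y] -= 1
    u, v = [v for v in degree if degree[v] == 1]      # the final edge (K_2)
    edges.append((u, v))
    return edges

T = [(1, 4), (2, 4), (3, 4), (4, 5)]     # a star-like tree on [5]
print(prufer_code(T, 5))                 # [4, 4, 4]
print(tree_from_code([4, 4, 4]))         # recovers the edges of T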

Theorem 4.1.5 (Tree counting theorem). The number of labelled spanning trees on n vertices with degree sequence d1 , . . . , dn is \binom{n−2}{d_1 −1,\dots,d_n −1} = \frac{(n−2)!}{\prod_{i=1}^{n} (d_i −1)!} for n ≥ 3.

Show that Cayley’s theorem follows as a corollary of the tree counting theorem.

Proof. Denote by t(n; d1 , . . . , dn ) the number of trees with degree sequence d1 , . . . , dn . The theorem holds for n = 3. Assume that it is true for n − 1. Assume that dj = 1 for some j (every tree has a vertex of degree 1).
Suppose j is joined to i in a tree T ; then T − (i, j) is a tree on [n] − j with degree sequence d1 , . . . , di − 1, . . . , dn . Thus we get that

t(n; d1 , . . . , dn ) = \sum_{i=1, i≠j}^{n} t(n − 1; d1 , . . . , di − 1, . . . , d̂j , . . . , dn ),

where d̂j means that dj is absent. Note that for any i with di = 1, t(n − 1; d1 , . . . , di − 1, . . . , d̂j , . . . , dn ) = 0, so such terms do not contribute. Now check inductively that t(n; d1 , . . . , dn ) = \binom{n−2}{d_1 −1,\dots,d_n −1}.

Exercise* 4.1.6. Give an alternative proof of tree counting theorem using Prufer codes.

Remark 4.1.7 (More proofs of tree counting theorem). As with any such fundamental result, there are multiple proofs.

1. We shall see a more powerful tree counting argument (known as Kirchoff ’s matrix tree theorem)
using matrix theory later.

2. There is another proof via bijection due to Joyal and a recent double counting argument due to
Pitman.

All these five proofs can be found in [Aigner et al. 2010, Chapter 30].

Exercise* 4.1.8. Show that the following are equivalent. Let G be a graph with k components.

1. T ⊂ G is a spanning forest 2 , i.e., a forest such that every component of the forest is a spanning tree of the corresponding component.

2. T is a ’minimal’ spanning subgraph with k components. Here minimality of T means that T − e has k + 1 components for any e ∈ T .

3. T is a maximal subgraph without cycles. Here maximality of T means that there is no T′ ⊋ T such that T′ has no cycles.

Exercise 4.1.9. Let H ⊂ G be a subgraph and e ∈ G \ H. Then exactly one of the following holds :
(1) β0 (H ∪ e) = β0 (H) − 1 or (2) β0 (H ∪ e) = β0 (H) and there exists a cycle C ⊂ H ∪ e such that
e ∈ C and C \ e is not a cycle in H.

Definition 4.1.10 (Cut edges and vertices ). An edge e is a cut-edge in a graph G if β0 (G − e) =


β0 (G) + 1. A vertex is a cut-vertex in a graph G if β0 (G − v) > β0 (G).

Exercise 4.1.11. An edge is a cut-edge iff it belongs to no cycle.

Lemma 4.1.12. TFAE

1. G is a forest

2. There exists a unique simple path between u to v for all u 6= v which are in the same component.

3. Every edge is a cut-edge.

Proof. Since a forest has no cycles, (i) ⇒ (iii) follows from the above exercise. Conversely, if every edge is a cut-edge, then there are no cycles and hence G is a forest, i.e., (iii) ⇒ (i). If there exist two distinct simple paths from u to v for u ≠ v in the same component, then the union of the paths contains a cycle (prove this as an exercise !). This contradicts (i) and (iii). Thus, (i) and (iii) both imply (ii). Assuming (ii), it is easy to see that there is no cycle in G and hence both (i) and (iii) hold.

Lemma 4.1.13 (Insertion property of Spanning Trees). Suppose T is a spanning tree of G and there exists e ∈ G − T . Then there exists e′ ∈ T such that T + e − e′ is also a spanning tree.
2
This terminology is a slight abuse of our usual usage of the term ‘spanning’.

Proof. For any e′ ∈ T , T + e − e′ has the same number of edges as T . Hence it suffices to show that there exists e′ ∈ T such that T + e − e′ does not have a cycle. This can be seen easily as follows : since T is a spanning tree, T + e contains a cycle. Now remove any edge e′ ≠ e of the cycle, and we can see that T + e − e′ does not contain a cycle.

Exercise* 4.1.14 (Deletion property of Spanning Trees). Suppose T, T′ are spanning trees of G and e ∈ T − T′ . Then there exists e′ ∈ T′ such that T − e + e′ is also a spanning tree.

4.2 Minimal spanning trees


Definition 4.2.1 (Minimal Spanning Tree). Consider a weighted graph (G, w). Given a subgraph H ⊂ G, we define w(H) := \sum_{e∈H} w(e). MST is said to be a minimal spanning tree if w(MST) = min{w(T ) : T is a spanning tree}.

Of course, a minimal spanning tree exists if G is a connected graph. A set of edges S ⊂ E is said to be a cut3 if β0 (G − S) = β0 (G) + 1 and for any S′ ⊊ S, β0 (G − S′ ) = β0 (G).

Proposition 4.2.2 (Some properties of MST). Let G be a connected graph with edge-weights.

1. Uniqueness : If w : E → R is an injective function, then M ST is unique.

2. Cut property : If M is a MST and C is a cut in G, then one of the minimal weight edges in C
should be in the M .

3. Cycle property : If M is a MST and C is a cycle in G, then one of the maximal weight edges
in C will not be in M .

Proof. (i) : Let T1 , T2 be MSTs such that T1 ≠ T2 . Since T1 , T2 have the same vertex set, there exists an edge in T1 ∆T2 . Choose the edge e1 of least weight in T1 ∆T2 and WLOG let e1 ∈ T1 . Since T2 is a spanning tree, T2 + e1 has a cycle C. Since C ⊄ T1 , there exists e2 ∈ C − T1 , and also e2 ∈ T1 ∆T2 . Thus w(e2 ) > w(e1 ), as e1 has the least weight in T1 ∆T2 and w is injective. As in the proof of the insertion property for spanning trees (Lemma 4.1.13), we can show that T2 + e1 − e2 is a tree. Thus, we have that w(T2 + e1 − e2 ) < w(T2 ), contradicting the minimality of T2 .

(ii) : Let C = {e1 , . . . , ek } in non-decreasing order of weights. If e1 := (u, v) ∉ M , then M ∪ e1 has a cycle. Since there exists a path in M from u to v and C is a cut, this path must pass through ei for some i > 1. This gives that M − ei + e1 is a spanning tree (since it has no cycles and has n − 1 edges). But since M is a MST, w(ei ) ≤ w(e1 ). By the choice of e1 , w(e1 ) ≤ w(ei ) and so w(ei ) = w(e1 ). Thus ei is a minimal weight edge of C and ei ∈ M as required.

(iii) : One can argue as above for this case too.

Exercise* 4.2.3. Prove (iii) in the above proposition.


3
Sometimes this is called a minimal cut and a cut is S ⊂ E if β0 (G − S) > β0 (G).

4.3 Kruskal and other algorithms


Kruskal’s algorithm : Input : a weighted connected graph G = (V, E, w).

Step 1: Initialize with D = ∅ ⊂ E and M = (V (G), ∅).

Step 2: Select one of the smallest (in terms of weight) edges in E − D. Call it e.

Step 3: If M ∪ e does not create a cycle, set M = M ∪ e.

Step 4: Set D = D ∪ e. If |E(M )| < n − 1 and D ≠ E, go to Step 2, else go to Step 5.

Step 5: Output M .
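A compact implementation sketch of Kruskal’s algorithm in Python (not from the notes; the names are mine, and the union-find structure replaces the explicit cycle check of Step 3: an edge creates a cycle iff its endpoints already lie in the same component of M).

def kruskal(vertices, weighted_edges):
    """weighted_edges: list of (weight, u, v). Returns a list of MST edges (u, v, w)."""
    parent = {v: v for v in vertices}

    def find(v):                            # root of the component containing v
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    mst = []
    for w, u, v in sorted(weighted_edges):  # Step 2: edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # Step 3: adding e creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
        if len(mst) == len(vertices) - 1:   # Step 4: stop once M is spanning
            break
    return mst

edges = [(1, 'a', 'b'), (4, 'a', 'c'), (2, 'b', 'c'), (5, 'c', 'd'), (3, 'b', 'd')]
print(kruskal(['a', 'b', 'c', 'd'], edges))
# [('a', 'b', 1), ('b', 'c', 2), ('b', 'd', 3)]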

Theorem 4.3.1. The output of the Kruskal’s algorithm is a minimal spanning tree if G is connected.

Proof. The proof consists of two steps : firstly, we will show that the output M is a spanning tree, and then we will show that it is of minimum weight.

Step 1 : M is a spanning tree. Clearly M does not contain a cycle. If M has at least two components C1 , C2 , then let e be an edge of minimal weight between C1 and C2 . Such an e exists as G is connected, and e would have been added (in Step 3) when it was considered in Step 2. Thus, we derive a contradiction to the assumption that M is disconnected. Hence M is connected and acyclic, i.e., a spanning tree.

Step 2 : M has minimum weight. We shall show that the following claim holds by induction
: At every stage of the algorithm, there is a minimal spanning tree T such that M ⊂ T .
The above claim suffices because at the last step, the output M is the M at the end of Step 3 and if
it is a subgraph of a spanning tree T , then it must be that M = T as M is a spanning tree from Step 1.

Induction argument : The claim holds in the beginning because M = ∅. Assume that the claim
holds at some stage of the algorithm for a M and a T .
Now, suppose that the next chosen edge creates a cycle, then M does not change and the claim
still holds.
So, let the next chosen edge not create a cycle, and hence M becomes M + e. If e ∈ T , we are done. Suppose e ∉ T . Then T + e has a cycle C. This cycle contains edges which do not belong to M , since e does not form a cycle in M but does in T . Note that there exists f ∈ C − M which has not been considered by the algorithm before and hence must have weight at least as large as e. Indeed, if f had been considered before e in the algorithm, M − e + f would not contain a cycle (as M − e + f ⊂ T ) and so f would have been added to M already.
Then T − f + e is a tree by the insertion property and it has the same or less weight as T . If
w(f ) = w(e), then T − f + e is a minimum spanning tree containing M + e and again the claim holds.
If w(f ) > w(e), then T − f + e is a spanning tree of strictly smaller weight than T and leads to a
contradiction.

Prim-Dijkstra-Jarnik’s algorithm : Input : a weighted connected graph G = (V, E, w).

Step 1: Initialize with M = D = ∅ and S = {v}, T = V − S for some v ∈ V .

Step 2: Select one of the smallest (in terms of weight) edges in E ∩ (S × T ). Call it e and M = M ∪ e.

Step 3: Say e = (v1 , v2 ) ∈ S × T then set S = S ∪ {v2 } and T = V − S.

Step 4: If T ≠ ∅, then go to Step 2, else go to Step 5.

Step 5: Output M .
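A sketch of the Prim-Dijkstra-Jarnik algorithm in Python, using a priority queue to pick the smallest crossing edge in Step 2 (again only an illustration under my own naming choices, not the notes’ implementation).

import heapq

def prim(adj, start):
    """adj: dict v -> list of (weight, neighbour). Returns MST edges as (u, v, w).
    The heap holds candidate edges from S to V - S; stale entries whose
    endpoint has already entered S are simply skipped."""
    S = {start}
    heap = [(w, start, u) for w, u in adj[start]]
    heapq.heapify(heap)
    mst = []
    while heap and len(S) < len(adj):
        w, u, v = heapq.heappop(heap)        # Step 2: smallest edge leaving S
        if v in S:
            continue
        S.add(v)                             # Step 3: move v from T to S
        mst.append((u, v, w))
        for w2, x in adj[v]:
            if x not in S:
                heapq.heappush(heap, (w2, v, x))
    return mst

adj = {'a': [(1, 'b'), (4, 'c')], 'b': [(1, 'a'), (2, 'c'), (3, 'd')],
       'c': [(4, 'a'), (2, 'b'), (5, 'd')], 'd': [(3, 'b'), (5, 'c')]}
print(prim(adj, 'a'))   # [('a', 'b', 1), ('b', 'c', 2), ('b', 'd', 3)]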

Theorem 4.3.2. The output of Prim-Dijkstra-Jarnik’s algorithm is a minimal spanning tree if G is


connected.

Proof. Denote the output by M̃ . The proof of the fact that M̃ is a spanning tree is left as an exercise as the steps were sketched in the class. We shall only show minimality here.
Claim : As with the proof of Kruskal’s algorithm, we shall show that at every step M ⊂ M ∗ with the latter being a MST.
The claim is trivially true at the initial step, when M = ∅, since a MST exists. Suppose the claim is true up to some step with M, T, S already determined, i.e., M ⊂ M ∗ , a MST.
Let e = (u, v) be the edge chosen by Prim’s algorithm at this stage and suppose e ∉ M ∗ . Then there exists a path P from u to v in M ∗ . This implies that there exists an edge e′ ∈ P ∩ (S × T ). But since Prim’s algorithm selected e instead of e′ , we have that w(e) ≤ w(e′ ). But since P + e is a cycle in M ∗ + e, we can remove any edge from the cycle and still get a spanning tree. Thus, M ∗ + e − e′ is also a spanning tree and since M ∗ is a MST, we have that w(e) ≥ w(e′ ). Thus, we get that w(e) = w(e′ ) and M + e ⊂ M ∗ + e − e′ , a MST. This proves the claim and completes the proof.

Exercise* 4.3.3. The multi-set of weights of a minimal spanning tree is unique, i.e., for any s ∈ R and any two MSTs M, M′ , we have that |{e ∈ M : w(e) = s}| = |{e ∈ M′ : w(e) = s}|.

Exercise 4.3.4 (Minimal Spanning Forests). Define and suitably extend these notions to minimal
spanning forests.

Exercise* 4.3.5. Given a connected graph G = (V, E) with non-negative edge weights w satisfying the following conditions : w(u, v) = ∞ if (u, v) ∉ E and w(u, v) > 0 if u ∼ v. Fixing a starting vertex x, show that the following algorithm computes the shortest distances from x to all other vertices in G.

Step 1 : Let S = {x}, t(x) = 0 and t(u) = w(u, x) for u ∉ S.

Step 2 : Select a u ∉ S such that t(u) = min_{z∉S} t(z). Change S to S ∪ {u} and update t(z) for z ∉ S to min{t(z), t(u) + w(u, z)}.

Step 3 : Continue Step 2 until S = V (G) and set d(u, x) = t(u) for all u.

Show that d(u, x) = dG (u, x) for all u.
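A sketch of the algorithm in Exercise 4.3.5 (essentially Dijkstra’s shortest-path algorithm) in Python, with a heap replacing the explicit minimum in Step 2; the names and graph representation are my own choices.

import heapq

def shortest_distances(adj, x):
    """adj: dict u -> list of (w(u, z), z) with strictly positive weights.
    Returns t with t[u] = d_G(x, u) for every u reachable from x."""
    t = {x: 0}
    heap = [(0, x)]
    done = set()                     # plays the role of the set S above
    while heap:
        d, u = heapq.heappop(heap)   # Step 2: u not in S with minimal t(u)
        if u in done:
            continue
        done.add(u)
        for w, z in adj[u]:
            if z not in done and d + w < t.get(z, float('inf')):
                t[z] = d + w         # update t(z) to min{t(z), t(u) + w(u, z)}
                heapq.heappush(heap, (t[z], z))
    return t

adj = {'x': [(1, 'a'), (4, 'b')], 'a': [(1, 'x'), (2, 'b')],
       'b': [(4, 'x'), (2, 'a'), (1, 'c')], 'c': [(1, 'b')]}
print(shortest_distances(adj, 'x'))   # {'x': 0, 'a': 1, 'b': 3, 'c': 4}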



4.4 ***Some questions : Random spanning trees***


Let us consider the following weighted graph : Kn , the complete graph, is our graph and the edge weights {w(e)}_{e∈E(Kn )} are i.i.d. U [0, 1] random variables. Let Mn denote the minimal spanning tree (why is it unique ?). Alan Frieze ([Frieze 1985]) showed that w(Mn ) → ζ(3) = \sum_{k=1}^{∞} k^{−3} = 1.202 . . . , where ζ is the famed Riemann zeta function. See [Addario-Berry 2015] for a newer proof.
We will now consider another weighted graph : let V1 , . . . , Vn denote i.i.d. uniform points in [0, 1]^d . Let Gn be the complete graph on {V1 , . . . , Vn } with edge weights being the Euclidean distance between the points, i.e., w(Vi , Vj ) = |Vi − Vj | for i ≠ j. Let Mn denote the minimal spanning tree (why is it unique ?). It is known that n^{−(d−1)/d} w(Mn ) → Cd for some Cd ∈ (0, ∞). See the wonderful monograph of [Steele 1997] for a proof of this and various other related results in probabilistic combinatorial optimization such as the stochastic travelling salesman problem, minimal matchings etc. Determining more information about the constant Cd is still open. See [Bertsimas 1990, Rhee 1992, Frieze and Pegden 2017] for some progress in this direction.
See the following talk by Louigi Addario-Berry for more on the thriving research on probabilistic aspects of minimal spanning trees : http://problab.ca/louigi/talks/Msts.pdf.
Chapter 5

Extremal Graph Theory

The term ‘extremal’ refers to the fact that one is concerned with extremal (maximal or minimal) questions about graphs. A prototypical question is the maximal or minimal number of edges in a graph with a certain property. We have already answered such a question about the maximal number of edges for a graph to be a tree. Turan’s theorem, which we shall see shortly, is considered one of the foundational results in this subject. It was in answering a question in extremal graph theory that Erdös made powerful use of probabilistic ideas, and this gave birth to what is now famously known as the probabilistic method. We shall see an illustration of this in the second section.

5.1 Existence of complete subgraphs and Hamiltonian circuits

Lemma 5.1.1. Every graph has a self-avoiding walk of length δ(G).

Proof. WLOG assume δ(G) ≥ 1. Let v0 be a vertex in G. Given vi with i < δ(G), we can choose a neighbour vi+1 of vi such that vi+1 ∉ {v0 , . . . , vi−1 }, as vi has at least δ(G) neighbours. Hence we get a self-avoiding walk v0 v1 . . . vδ(G) as needed.

Define the girth of a graph as g(G) := min{l(C) : C a cycle in G}; set g(G) = ∞ if there is no cycle. Recall that the diameter of a graph is diam(G) := max{d(u, v) : u, v ∈ V }.

Lemma 5.1.2. If G has a cycle, g(G) ≤ 2diam(G) + 1.

Proof. Assume otherwise. Let C = v0 . . . vk v0 be a cycle of minimal length. Suppose k is even, i.e., l(C) = k + 1 is odd. Then d(v0 , vk/2 ) = k/2. But by the definition of diameter, k/2 ≤ diam(G) and so l(C) = k + 1 ≤ 2 diam(G) + 1. If k is odd, then d(v0 , v(k+1)/2 ) = (k + 1)/2 and again we have that l(C) = k + 1 ≤ 2 diam(G).

Lemma 5.1.3 (Mantel’s theorem, 1907). If G is a graph on n vertices with no triangle, then |E| ≤ ⌊n^2/4⌋. Equivalently, if |E| > n^2/4, then g(G) = 3.


Proof. Let G have [n] as vertex set and no triangles. Let zi ≥ 0 be a weight on vertex i such that \sum_i zi = 1, and let us try to maximize S = \sum_{i∼j} zi zj . Let k ≁ l be non-adjacent vertices, and write \sum_{j∼k} zj = x and \sum_{j∼l} zj = y. WLOG assume x ≥ y. Observe that for any 0 ≤ ε ≤ zl , zk x + zl y ≤ (zk + ε)x + (zl − ε)y. Thus if (z1 , . . . , zn ) is a configuration of weights, then (z1 , . . . , zk + zl , . . . , 0, . . . , zn ) is a configuration with S at least as large, i.e., we have transferred the weight from zl to zk . If we repeat the procedure, it stops when the weights are concentrated on two adjacent vertices. This is because whenever the weights are concentrated on at least three vertices, there are two of them which are not neighbours, as the induced subgraph is not complete (G has no triangles). If the two adjacent vertices are i, j then S = zi zj ≤ 1/4.
Let zi = n^{−1} for all i ∈ [n]. Then the corresponding S = n^{−2} |E| and this is at most 1/4 by the above argument. Hence the theorem is proved.

Theorem 5.1.4 (Turan, 1941). If a simple graph on n vertices has no complete subgraph Kp , then |E| ≤ M (n, p) := \frac{(p−2)n^2 − r(p−1−r)}{2(p−1)} , where r ≡ n (mod p − 1).

The above bound can be achieved as follows : Let S1 , . . . , Sp−1 be an almost equal partition of V , i.e., S1 , . . . , Sr are subsets of size t + 1 and the rest are of size t, for some t ≥ 0 and r ≥ 1. Construct
the complete multi-partite graph on S1 , . . . , Sp−1 such that all the edges between Si and Sj are present
for i 6= j and these are the only edges. This does not have a complete subgraph Kp and the number
of edges is M (n, p).

Proof. Let t be such that n = t(p − 1) + r. We will prove by induction on t. If t = 0, then


n = r, M (n, p) = n(n − 1)/2 and the theorem trivially holds as n ≤ p − 1. Now, consider a graph G
on n vertices with no Kp subgraph (i.e., a subgraph isomorphic to Kp ) and let G have the maximum
number of edges subject to these constraints. Hence, G contains a subgraph H isomorphic to Kp−1 .
If not, one can add an edge to G without creating a Kp subgraph and so contradicting its maximality.
Each vertex in V − H is joined to at most p − 2 vertices of H. Since |V − H| = n − p + 1 = (t − 1)(p − 1) + r and the induced subgraph ⟨V − H⟩ also does not contain a Kp subgraph, by the induction hypothesis, |E(⟨V − H⟩)| ≤ M (n − p + 1, p). Thus, we have that

|E(G)| ≤ M (n − p + 1, p) + (n − p + 1)(p − 2) + \binom{p−1}{2}

and one can easily verify that the RHS is equal to M (n, p).

See [Aigner et al. 2010, Chapter 36] for more proofs of Turan’s theorem.

Exercise* 5.1.5. If you generalize the argument for Mantel’s theorem given in the class, what is the
bound you get in Turan’s theorem ?

Theorem 5.1.6. If a graph G on n vertices has more than \frac{1}{2} n \sqrt{n − 1} edges, then G has girth ≤ 4. That is, G contains a triangle or a quadrilateral.

Proof. Suppose g(G) ≥ 5. Let v1 , . . . , vd be the neighbours of a vertex v. Since there are no triangles, vj ∉ Nvi for i ≠ j, and since there are no quadrilaterals, Nvi ∩ Nvj = {v} for i ≠ j. Thus, we have that (∪_{i=1}^{d} Nvi \ {v}) ∪ Nv ∪ {v} ⊂ [n], and so \sum_{i=1}^{d} (d_{vi} − 1) + d + 1 ≤ n, and hence \sum_{w∼v} dw ≤ n − 1. Thus, we get

n(n − 1) ≥ \sum_v \sum_{w∼v} dw = \sum_v d_v^2 ≥ n^{−1} (\sum_v dv )^2 = n^{−1} 4|E|^2 ,

which gives |E| ≤ \frac{1}{2} n \sqrt{n − 1}, a contradiction.

Theorem 5.1.7. If G is a graph on n vertices with n ≥ 3 and δ(G) ≥ n/2, then G contains a Hamilton cycle.

Recall that a Hamilton circuit is a simple closed path passing through every vertex exactly once.

Proof. Suppose G is a counterexample to the theorem, and let G be such a graph with a maximal number of edges, i.e., the addition of any edge to G creates a Hamilton cycle. Let v ≁ w; then G ∪ (v, w) will contain a Hamilton cycle v = v1 v2 . . . vn = w, v. Thus v1 v2 . . . vn is a simple path. Define the sets Sv := {i : v ∼ vi+1 } and Sw := {i : w ∼ vi }. Since δ(G) ≥ n/2, |Sv |, |Sw | ≥ n/2 and further Sv , Sw ⊂ {1, . . . , n − 1}. Hence Sv ∩ Sw ≠ ∅; let i0 ∈ Sv ∩ Sw . Then v = v1 v2 . . . vi0 w = vn vn−1 . . . vi0+1 v1 = v is a Hamiltonian circuit in G, contradicting our assumption.

Where does the argument given in the class fail for n = 2 ?

Exercise* 5.1.8. 1. If δ(G) ≥ 2, every graph has a cycle of length at least δ(G) + 1.

2. Prove that a graph G with g(G) = 4 and δ(G) ≥ k has at least 2k vertices. Is 2k the best lower bound ?

3. Let G be a graph with g(G) = 5 and δ(G) ≥ k. Show that G has at least k^2 + 1 vertices.

4. Prove that a k-regular bipartite graph has no cut-edge for k ≥ 2.

5. Show that the Petersen graph (defined in Section 3.1) is the largest 3-regular graph with diameter 2.

6. Let G be a simple graph on n vertices (n > 3) with no vertex of degree n − 1. Suppose that for any two vertices of G, there is a unique vertex joined to both of them. If x and y are not adjacent, show that d(x) = d(y). Now, show that G is a regular graph.

7. Show that if a simple graph on n vertices has ⌊n^2/4⌋ edges and no triangles, then it is the complete bipartite graph Kk,k if n = 2k or Kk,k+1 if n = 2k + 1.

8. If a simple graph on n vertices has e edges, then it has at least \frac{e}{3n} (4e − n^2) triangles.

9. Suppose G is a graph with at least one edge. Then there exists a subgraph H such that δ(H) > ε(H) ≥ ε(G), where recall that ε(G) was defined as the edge density of the graph (see Definition 3.2.8).

5.2 ***Probabilistic Method : An introduction***


This section is not part of the syllabus but rather an introduction to one of the most powerful tools in modern-day graph theory and combinatorics.

Exercise 5.2.1. In any graph G on 6 vertices, either K3 ⊂ G or K3 ⊂ Ḡ.

Given m, n we define the Ramsey numbers R(m, n) as follows :

R(m, n) := inf{t : For any G ⊂ Kt , Km ⊂ G or Kn ⊂ Gc }.

Note that R(m, n) = R(n, m) and R(m, 2) = m. Our previous exercise gives that R(3, 3) ≤ 6. Ramsey (1929) showed that R(m, n) < ∞ for all m, n. Using the probabilistic method, Erdös showed that R(k, k) > ⌊2^{k/2}⌋. We will present the proof now.
Let n = ⌊2^{k/2}⌋; we wish to show that there is a coloring of the edges of Kn with blue and red such that there is no monochromatic clique of size k. The set of all possible colorings of Kn is $\Omega_n := \{R, B\}^{E(K_n)}$, i.e., a set of cardinality $2^{\binom{n}{2}}$. The crucial idea in the probabilistic method is to pick a random variable X with values in Ωn such that the probability that X is a coloring with no monochromatic k-clique is positive; this trivially implies that the set of colorings with no monochromatic k-clique is non-empty, as desired. As is common, we view f ∈ Ωn as a function f : E(Kn) → {R, B}.
Let An := {f ∈ Ωn : f has no monochromatic k-clique}. Instead of showing P(X ∈ An) > 0, we will show that P(X ∈ Anᶜ) < 1. We shall shortly see why the required upper bound is easier. Firstly, note that if f ∉ An, then there exists a subset S ⊂ [n] with |S| = k such that $f|_{E_S} \equiv R$ or $f|_{E_S} \equiv B$, where we have used E_S to abbreviate the edge set of < S >, the complete subgraph on S. Hence, we have that
$$A_n^c \subset \bigcup_{S\subset[n],\,|S|=k} \big(\{f : f|_{E_S} \equiv R\} \cup \{f : f|_{E_S} \equiv B\}\big).$$

By the union bound for probability measures, we have that
$$P(X \notin A_n) \le \sum_{S\subset[n],\,|S|=k} \big(P(X|_{E_S} \equiv R) + P(X|_{E_S} \equiv B)\big).$$

The union bound is what makes it easier to upper bound than lower bound. Also, so far we have
not used any specific property of the random variable at all. We can now make a ‘clever’ choice of
the random variable to obtain the desired upper bound. This flexibility in making ‘clever’ choices of
random variables to obtain desired bounds is another hallmark of the probabilistic method and lends
enormous power to the method.
Now choose X ∈ Ωn as follows : X(e), e ∈ E(Kn) are i.i.d. random variables with P(X(e) = R) = 1/2 = P(X(e) = B). Equivalently, $P(X = f) = 2^{-\binom{n}{2}}$ for all f ∈ Ωn, i.e., X is uniformly distributed in Ωn. Now trivially, we have that
$$P(X|_{E_S} \equiv R) = P(X|_{E_S} \equiv B) = 2^{-\binom{k}{2}}$$

and thus
$$P(X \notin A_n) \le \binom{n}{k} 2^{1-\binom{k}{2}}.$$
I will leave it as an exercise to verify that, for the choice of n we have made, $\binom{n}{k} 2^{1-\binom{k}{2}} < 1$ as desired. This proves R(k, k) > n := ⌊2^{k/2}⌋.
The defect of the probabilistic method, as is obvious in the above proof, is that it is non-constructive, i.e., it does not explicitly exhibit a coloring with no monochromatic k-clique in Kn.
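The counting bound above is easy to check numerically. Below is a minimal Python sketch (not part of the original notes; the function name is ad hoc) that, for a few values of k, sets n = ⌊2^{k/2}⌋ and verifies that $\binom{n}{k} 2^{1-\binom{k}{2}} < 1$, which is exactly what the proof requires.

```python
from math import comb, floor

def ramsey_lower_bound_check(k: int) -> bool:
    """For n = floor(2^(k/2)), check C(n, k) * 2^(1 - C(k, 2)) < 1,
    which implies R(k, k) > n."""
    n = floor(2 ** (k / 2))
    # Compare exact integers: C(n,k) * 2 < 2^C(k,2).
    return 2 * comb(n, k) < 2 ** comb(k, 2)

for k in range(3, 15):
    print(k, floor(2 ** (k / 2)), ramsey_lower_bound_check(k))
```

For every k ≥ 3 the check returns True, in line with Erdös' computation.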
Read [Alon and Spencer 2004, Chapter 1] and [Aigner et al. 2010, Chapter 40] for more illustrative
examples on the probabilistic method and if interested, read more of [Alon and Spencer 2004].
Regardless of your liking of the proof via probabilistic method, Ramsey numbers raise many in-
teresting questions themselves. For example, R(5, 5) is still unknown. Here is a legendary statement
from Erdös attesting to the complexity of Ramsey numbers (see [Spencer 1994, Page 4]) :

” Erdös asks us to imagine an alien force, vastly more powerful than us, landing on Earth and
demanding the value of R(5, 5) or they will destroy our planet. In that case, he claims, we should
marshal all our computers and all our mathematicians and attempt to find the value. But suppose,
instead, that they ask for R(6, 6). In that case, he believes, we should attempt to destroy the aliens. ”

For more on Ramsey numbers, read https://en.wikipedia.org/wiki/Ramsey’s_theorem.

5.3 ***Some other methods***


Extremal graph theory and combinatorics is fertile ground for interesting ideas from different fields of mathematics. We have indicated the probabilistic method above and we mention two other notable ones : the linear algebra method ([Babai 1992]) and the very recent polynomial method ([Guth 2016, Grochow 2019]). An excellent overview of all three methods can be found in [Jukna 2011].
Chapter 6

Matchings, covers and factors.

6.1 Hall's marriage theorem, Koenig's, Gallai's and Berge's theorems
Definition 6.1.1. A subset M of edges is said to be independent / a matching if no two edges of M are incident to a common vertex or, equivalently, every vertex is contained in at most one edge of M. A complete matching M on a subset S ⊂ V is a matching that contains all the vertices in S. A perfect matching is a complete matching on V.

Alternatively, one can consider a matching M of a graph G as a subgraph of G such that dM(v) = 1 for all v ∈ V(M). A matching M is perfect if it is spanning. A vertex v is said to be saturated (by M) if v ∈ V(M) and unsaturated otherwise. For a subset S ⊂ V, N(S) = ∪_{v∈S} N(v).

Theorem 6.1.2 (Hall’s marriage theorem ; Hall, 1935). Let G be a bi-partite graph with the two
vertex sets being V1 , V2 . Then there exists a complete matching on V1 iff |N (S)| ≥ |S| for all S ⊂ V1 .

We will give a proof via induction now and a later exercise will involve proof using max-flow
min-cut theorem. See [Diestel 2000, Section 2.1] for two more proofs.

Proof. Let |V1 | = k and our proof will be by induction on k. If k = 1, the proof is trivial.
Let G be a bi-partite graph on V1 ∪ V2 and assume that the result holds for any bi-partite graph whose first vertex class is strictly smaller.
Suppose that |N(S)| ≥ |S| + 1 for all non-empty S ⊊ V1. Then choose (v, w) ∈ E ∩ (V1 × V2) and consider the induced subgraph G′ := < V − {v, w} >. Since we have removed only w from V2 and |N(S)| ≥ |S| + 1 for all non-empty S ⊊ V1, we get that |N(S′)| ≥ |S′| for all S′ ⊂ V1 − {v}. Thus there is a complete matching M on V1 − {v} in G′ by the induction hypothesis, and M ∪ (v, w) is a complete matching on V1 in G as desired.
If the above is not true, then there exists a non-empty A ⊊ V1 such that, setting B := N(A), we have |A| = |B|. Hall's condition trivially holds for A in the induced subgraph < A ∪ B >, i.e., for all S ⊂ A, |N(S) ∩ B| = |N(S)| ≥ |S|, so by the induction hypothesis there is a complete matching M0 on A in < A ∪ B >. Let G′ := G − < A ∪ B > and let S ⊂ V1 − A. Suppose |N′(S)| < |S| where N′(S) = N(S) ∩ (V2 − B). Then, we have that


N(S ∪ A) = N′(S) ∪ B and hence |N(S ∪ A)| ≤ |N′(S)| + |B| < |S| + |A|, a contradiction. Hence, G′ also satisfies Hall's condition and again by the induction hypothesis G′ has a complete matching M′ on V1 − A. Thus, we have a complete matching M := M0 ∪ M′ on V1 in G.
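Hall's condition can also be checked algorithmically: a maximum matching in a bipartite graph can be grown one augmenting path at a time, and it saturates V1 exactly when Hall's condition holds. Below is a minimal Python sketch of this augmenting-path approach (a standalone illustration, not the inductive proof above); the graph representation and names are my own.

```python
def max_bipartite_matching(adj, n_left):
    """adj[u] = list of right-vertices adjacent to left-vertex u (0-indexed).
    Returns (size, match_right) where match_right maps a right-vertex to its partner."""
    match_right = {}

    def try_augment(u, visited):
        # Try to find an augmenting path starting at the left-vertex u.
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere.
            if v not in match_right or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    matched = sum(try_augment(u, set()) for u in range(n_left))
    return matched, match_right

# Example: a complete matching on V1 exists iff the returned size equals n_left,
# equivalently iff Hall's condition holds.
adj = {0: [0, 1], 1: [0, 2], 2: [1, 2]}
print(max_bipartite_matching(adj, 3))
```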

Exercise 6.1.3. For k > 0, a k-regular bipartite graph has a perfect matching.

Proposition 6.1.4. Let d ≥ 1. Let G be a bipartite graph on V1 t V2 such that |N (S)| ≥ |S| − d for
all S ⊂ V1 . Then G has a matching with at least |V1 | − d independent edges.

Proof. Set V2′ := V2 ⊔ [d], where [d] is a set of d new vertices. Define G′ with vertex set V1 ⊔ V2′ and edge set E(G) ∪ (V1 × [d]). Then, it is easy to see that Hall's condition holds on G′ and hence there is a complete matching M of V1 in G′. Now, if we remove the edges in M incident on [d], we get a matching with at least |V1| − d edges as required.

Exercise* 6.1.5. Let G be a bi-partite graph with partitions V1 = {x1, . . . , xm} and V2 = {y1, . . . , yn}. Then G has a subgraph H such that dH(xi) = di and dH(yj) ≤ 1 for all 1 ≤ i ≤ m and 1 ≤ j ≤ n iff for all S ⊂ V1 we have that $|N(S)| \ge \sum_{x_i \in S} d_i$.

Definition 6.1.6 (Independent sets and covers). An independent set of vertices is S ⊂ V such that
no two vertices in S are adjacent. A subset of vertices S ⊂ V is a vertex cover if every edge in G is
incident to at least one vertex in S. An edge cover is a set of edges E 0 ⊂ E such that every vertex is
contained in at least one edge in E 0 .

For the relevance of independent sets to information theory/communication, see Section 6.4.

Definition 6.1.7 (Independence number and cover number).

α(G) = max{|S| : S independent vertex set}.


α0 (G) = max{|M | : M independent edge set}.
β(G) = min{|S| : S vertex cover}.
β 0 (G) = min{|E 0 | : E 0 edge cover}.

Exercise 6.1.8. Prove that a graph G is bipartite if and only if every subgraph H of G has an
independent set consisting of at least half of V(H).

We first derive some trivial relations between the four quantities. If M is a maximum matching, then to cover the edges of M we need |M| distinct vertices, and hence any vertex cover has size at least |M|. This yields the first inequality below. For the middle inequality, observe that the endpoints of a maximal matching form a vertex cover (an uncovered edge could be added to the matching).

α0 (G) ≤ β(G) ≤ 2α0 (G) ; α(G) ≤ β 0 (G).

As for the second inequality, observe that to cover vertices of an independent set, we need distinct
edges.

Lemma 6.1.9. Let G be a graph. S ⊂ V is an independent set iff S c is a vertex cover. As a corollary,
we get α(G) + β(G) = n = |V |.

The above lemma follows trivially from the definitions.

Theorem 6.1.10 (Konig, Egervary, 1931). For a bi-partite graph, α0 (G) = β(G).

Remark 6.1.11 (Equivalent Formulation :). A bi-partite graph is equivalent to a 0−1 matrix. Denote the rows by V1 and the columns by V2. If the (v1, v2)-entry is 1 then there is an edge v1 ∼ v2. Conversely, it is easy to see that a bi-partite graph can be represented as a 0−1 matrix with rows indexed by V1 and columns indexed by V2.
By a line, we shall mean either a row or a column. Under the matrix formulation, vertex cover is
a set of lines that include all the 1’s. An independent set of edges is a collections of 1’s such that no
two 1’s are on the same line.

Proof. We will show that for a minimum vertex cover Q, there exists a matching of size at least |Q|.
Partition Q into A := Q ∩ V1 and B := Q ∩ V2 . Let H and H 0 be induced subgraphs on A t (V2 − B)
and (V1 − A) t B respectively. If we show that there is a complete matching on A in H and a complete
matching on B in H 0 , we have a matching of size at least |A| + |B| (= |Q|) in G. Also, note that it
suffices to show that there is a complete matching on A in H, because we can reverse the roles of A and B and apply the same argument to B as well.
Since A ∪ B is a vertex cover, there cannot be an edge between V1 − A and V2 − B. Suppose for
some S ⊂ A, we have that |NH(S)| < |S|. Since NH(S) covers all edges from S that are not incident on B, Q′ := (Q − S) ∪ NH(S) is also a vertex cover. By the choice of S, Q′ is a smaller vertex cover than Q, contradicting the minimality of Q. Hence, Hall's condition holds true for A in H. And
by the arguments in the previous paragraph, the proof is complete.
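To see König's theorem in action on small examples, one can compute α′ and β by brute force. The sketch below (illustrative only; the names and the example graph are my own) enumerates subsets to find a maximum matching and a minimum vertex cover of a small bipartite graph.

```python
from itertools import combinations

def alpha_prime(vertices, edges):
    """Maximum number of pairwise disjoint edges (brute force)."""
    for size in range(len(edges), 0, -1):
        for M in combinations(edges, size):
            endpoints = [v for e in M for v in e]
            if len(endpoints) == len(set(endpoints)):  # no shared vertex
                return size
    return 0

def beta(vertices, edges):
    """Minimum size of a vertex cover (brute force)."""
    for size in range(len(vertices) + 1):
        for S in combinations(vertices, size):
            if all(e[0] in S or e[1] in S for e in edges):
                return size

# Bipartite example: V1 = {a, b, c}, V2 = {x, y}.
V = ['a', 'b', 'c', 'x', 'y']
E = [('a', 'x'), ('b', 'x'), ('b', 'y'), ('c', 'y')]
print(alpha_prime(V, E), beta(V, E))  # both equal 2, as Konig-Egervary predicts
```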

Exercise 6.1.12. If a graph G does not contain a path of length more than 2, show that its connected components are all star graphs.

Theorem 6.1.13 (Gallai, 1959). If G is a graph without isolated vertices, then α0 (G) + β 0 (G) = n =
|V |.

Proof. Suppose M is a maximum matching and set S = V − V(M). If there were an edge between two vertices of S, it could be added to M to obtain a larger matching. Hence there are no edges between vertices of S, i.e., S is an independent set. Construct an edge cover Q as follows : add all edges of M to Q and, for each v ∈ S, add one of its incident edges to Q (such an edge exists as G has no isolated vertices). Thus |Q| = |M| + |S| and, since V(M) ⊔ S = V, we can derive that

α0 (G) + β 0 (G) ≤ |M | + |Q| = 2|M | + |S| = n.

Let Q be a minimum edge cover. Then Q cannot contain a path of length more than 2 : else, by removing the middle edge in a path of length at least 3, we would obtain a smaller edge cover. By the

previous exercise, Q is a graph consisting of star components. If C1, . . . , Ck are the components of Q, then V(C1) ∪ . . . ∪ V(Ck) = V and E(C1) ∪ . . . ∪ E(Ck) = Q. Now choose a matching M = {e1, . . . , ek} by selecting one edge from every component C1, . . . , Ck. Since the Ci's are disjoint, M is a matching. Thus, using the fact that Q is a forest with k components (so $|E(C_i)| = |V(C_i)| - 1$), we can derive that
$$\alpha'(G) + \beta'(G) \ge |M| + |Q| = k + \sum_{i=1}^{k} |E(C_i)| = \sum_{i=1}^{k} |V(C_i)| = n.$$

As a corollary, we get König's result : if G is a bi-partite graph without isolated vertices, then α(G) = β′(G).

Definition 6.1.14 (Augmenting path ). Given a matching M, an M-alternating path P is a path whose edges alternate between M and M^c. An M-augmenting path is an M-alternating path whose end-vertices are unsaturated, i.e., do not belong to V(M).

Theorem 6.1.15 (Berge, 1957 ). A matching M in a graph is a maximum matching in G iff G has
no M -augmenting path.

Exercise* 6.1.16. If M, M′ are two matchings, every component of the symmetric difference M △ M′ is a path or an even cycle.

Proof. Suppose there is an M-augmenting path P. Let P = v0 v1 . . . vk. Since P is M-augmenting, (v0, v1), (v2, v3), . . . , (vk−1, vk) ∉ M and (v1, v2), (v3, v4), . . . , (vk−2, vk−1) ∈ M. Now, observe that M′ = (M − P) ∪ {(v0, v1), (v2, v3), . . . , (vk−1, vk)} is a larger matching than M. Hence if M is a maximum matching, there is no M-augmenting path.
Suppose M′ is a larger matching than M. We shall construct an M-augmenting path and prove the theorem by contraposition. Let F = M △ M′. We know by the above exercise that the components of F are paths or even cycles. Since |M′| > |M|, there must be a component of F in which M′ has more edges than M. If a component of F is an even cycle, it consists of the same number of edges from M and M′. Thus, the component in which M′ has more edges must be a path, say P = v0 . . . vk. Since P ⊂ F, P has to be an M-alternating path, i.e., (v0, v1) ∈ M′, (v1, v2) ∈ M, . . . or (v0, v1) ∈ M, (v1, v2) ∈ M′, . . .. Since m′ := |M′ ∩ P| > |M ∩ P| =: m and P is an M-alternating path, we derive that m′ − m = 1 and k = 2m + 1. Further, this implies that (v0, v1), (v2, v3), . . . , (vk−1, vk) ∈ M′ and (v1, v2), (v3, v4), . . . , (vk−2, vk−1) ∈ M, i.e., P is an M-augmenting path.

6.2 Graph factors and Tutte’s theorem


Definition 6.2.1 (Factor of a graph ). Given a graph G, a subgraph H is said to be a factor if
V (H) = V (G) i.e., spanning. An r-factor is a factor that is r-regular.

Thus, 1-factors are nothing but perfect matchings.

Theorem 6.2.2 (Petersen, 1891). Every regular graph of positive even degree has a 2-factor.

Proof. Let G be any 2k-regular graph for k ≥ 1. WLOG, let G be connected. Let v0 v1 v2 . . . vm = v0 be an Eulerian circuit in G (it exists as all degrees are even). Now, replace each vertex v by two vertices v−, v+, and replace each edge vi vi+1 of the circuit by the edge $v_i^- v_{i+1}^+$. Thus we get a new k-regular bipartite graph G′. By Exercise 6.1.3, there is a 1-factor in G′. Now, by merging the vertices v−, v+ back into v, we get a 2-factor in G.

For a graph G, let o(G) denote the number of odd components of G, i.e., components with an odd number of vertices.

Theorem 6.2.3 (Tutte , 1947 ). A graph has a 1-factor iff o(G − S) ≤ |S| for all S ⊂ V .

Proof. (Lovasz, 1975) If G has a 1-factor, then for any S ⊂ V each odd component of G − S has at least one vertex matched to a vertex of S, and these vertices of S are distinct. Thus o(G − S) ≤ |S| for all S ⊂ V if G has a 1-factor.
Now let G be a graph without a 1-factor. We shall show that there is a bad set, i.e., a set S violating Tutte's condition o(G − S) ≤ |S|. Note that any 1-factor of G is also a 1-factor of G′ = G + e, and if G′ has a bad set S then so does G (adding an edge cannot increase the number of odd components of G − S). Hence, we may assume that G is an edge-maximal graph without a 1-factor, and we shall find a bad set in G.
Heuristic : If S is a bad set for this edge-maximal G, then all components of G − S have to be complete. Indeed, if a component, say C, is not complete, we can consider G ∪ e where e is a missing edge in C. Since o(G ∪ e − S) = o(G − S) > |S|, by the forward implication of the theorem G ∪ e has no 1-factor, violating the edge-maximality of G. Further, by the same reasoning, every s ∈ S must be connected to all vertices in G − S. We will now show that these two conditions essentially characterize bad sets and then produce such a set.
Let us say a set S ⊂ V satisfies condition B if all components of G − S are complete and all s ∈ S
are connected to all vertices in G − S.
Claim : If S satisfies condition B, then either S is bad or ∅ is bad.
If S is not bad, then we can match one vertex from each of the odd components of G − S to distinct vertices of S (possible since o(G − S) ≤ |S| and every s ∈ S is adjacent to all of G − S), and then try to pair up the remaining vertices. In every even component of G − S we can pair up the vertices, and in every odd component too we can pair up the vertices not matched to S, since the components are complete. We are only left with a set S′ ⊂ S of remaining vertices, where |S′| = |S| − o(G − S), and S′ forms a complete subgraph. Since G does not contain a 1-factor, |S′| must be odd (otherwise we could pair up S′ as well and obtain a 1-factor). Since there is a complete matching on V − S′, |V − S′| is even and so |V| is odd, i.e., ∅ is a bad set.
Now, to show that G has a set S satisfying condition B, let S be the set of vertices that are adjacent to every other vertex of G. If S does not satisfy condition B, then some component of G − S is not complete. Let v ≁ w be two vertices in such a component and let v, v1, v2 be the first three vertices on a shortest v − w path. Then v ∼ v1, v1 ∼ v2 but v ≁ v2. Since v1 ∉ S, there is a vertex u such that u ≁ v1. By the edge-maximality of G, there is a 1-factor H1 in G + (v, v2) and a 1-factor H2 in G + (v1, u).
Let P = u . . . u′ be a maximal path in G starting at u with an edge from H1 and alternating between edges in H1 and H2. If the last edge of P is in H1, then by the maximality of P there is no edge of H2 in G incident on u′; this implies that u′ = v1, as every other vertex has its edge of H2 lying in G. We then set C = P + (v1, u). Note that C is a cycle and it is of even length, as P starts and ends with edges of H1. If the last edge of P is in H2, then again by the maximality of P there is no edge of H1 in G incident on u′, and as earlier this means u′ ∈ {v, v2}. Then consider C = P + (u′, v1) + (v1, u). Again, C is an even cycle, as P starts with an edge of H1 and ends with an edge of H2. In either case C is an even cycle containing (v1, u), and this is the only edge of C not in E(G). But then consider H′ = H2 − (C ∩ H2) + (C − H2), i.e., on C replace the edges of H2 by those of C − H2. Now, H′ ⊂ G. Since H2 is a 1-factor, so is H′, contradicting that G has no 1-factor. Thus S satisfies condition B and, by the Claim, G has a bad set as required.

The above proof of Lovasz can be found in [Diestel 2000].
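Tutte's condition is easy to test by brute force on small graphs, which makes for a useful sanity check. Below is an illustrative Python sketch (my own names and examples) comparing the condition o(G − S) ≤ |S| over all S ⊂ V against a brute-force search for a perfect matching.

```python
from itertools import combinations

def components(vertices, edges):
    """Return the connected components as a list of vertex sets."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    comps, seen = [], set()
    for v in vertices:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            x = stack.pop()
            if x not in comp:
                comp.add(x)
                stack.extend(adj[x] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def tutte_condition(vertices, edges):
    """Check o(G - S) <= |S| for every S subset of V."""
    for r in range(len(vertices) + 1):
        for S in combinations(vertices, r):
            rest = [v for v in vertices if v not in S]
            rest_edges = [e for e in edges if e[0] in rest and e[1] in rest]
            odd = sum(len(c) % 2 for c in components(rest, rest_edges))
            if odd > len(S):
                return False
    return True

def has_perfect_matching(vertices, edges):
    """Brute force: look for |V|/2 pairwise disjoint edges."""
    n = len(vertices)
    if n % 2:
        return False
    for M in combinations(edges, n // 2):
        ends = [v for e in M for v in e]
        if len(set(ends)) == n:
            return True
    return False

# A path on 4 vertices has a perfect matching; the star K_{1,3} does not.
P4 = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)])
K13 = ([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3)])
for V, E in (P4, K13):
    print(tutte_condition(V, E), has_perfect_matching(V, E))
```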

Corollary 6.2.4 (Defective version). A graph G contains a subgraph H with a 1-factor with |V (H)| ≥
|V (G)| − d iff o(G − S) ≤ |S| + d for all S ⊂ V .

Proof. Let G contain a subgraph H with a 1-factor with |V (H)| = |V (G)|−k for some k ≤ d. Consider
G0 := G ∨ Kk . Then G0 has a 1-factor by considering H along with a matching between V (G) − V (H)
and Kk . Let S ⊂ V . Then o(G0 −(S t[k])) ≤ |S|+k as G0 has a 1-factor. Further, G0 −(S t[k]) = G−S
and so o(G − S) ≤ |S| + k ≤ |S| + d.
Conversely, let o(G − S) ≤ |S| + d for all S ⊂ V, where WLOG d is the minimal such number, i.e., d = max{o(G − S) − |S| : S ⊂ V}. We will assume d ≥ 1, else d = 0 and Tutte's 1-factor theorem applies. Suppose d = o(G − S0) − |S0| for some S0 ⊂ V. Each of the o(G − S0) odd components contributes an odd number of vertices, so |V| has the same parity as o(G − S0) + |S0| = d + 2|S0|, which has the same parity as d. That is, |V| and d are both even or both odd.
Let G′ := G ∨ Kd; by the above argument |V′| = |V| + d is even. We will show that there exists a 1-factor in G′, and this will imply that G contains a subgraph H with a 1-factor such that |V(H)| ≥ |V(G)| − d. To show the existence of a 1-factor in G′ we verify Tutte's condition.
Let S′ ⊂ V ⊔ [d]. Suppose that [d] \ S′ ≠ ∅. Then G′ − S′ is connected (a remaining vertex of Kd is joined to every other vertex), and hence o(G′ − S′) ≤ 1 ≤ |S′| unless S′ = ∅; for S′ = ∅, o(G′ − S′) = 0 as |V′| is even. Now suppose [d] ⊂ S′ and write S′ = [d] ⊔ S for some S ⊂ V. Then G′ − S′ = G − S and so o(G′ − S′) = o(G − S) ≤ |S| + d = |S′|. Thus G′ satisfies Tutte's condition.

Definition 6.2.5 (f -factor ). Given a function f : V → N ∪ {0}, an f -factor of a graph G is a


subgraph H such that dH (v) = f (v) for all v ∈ V .

Tutte [1952] showed a necessary and sufficient condition for a graph G to have an f -factor. The
proof was by reducing it to a problem of checking for a 1-factor in a certain simple graph.
A graph construction : Given a graph G and a function f with f ≤ d (the degree function), we define a graph H as follows. Let e := d − f. To construct H, replace each vertex v by a complete bi-partite graph Kv := K_{d(v),e(v)} with vertex classes A(v) ⊔ B(v), where |A(v)| = d(v) and |B(v)| = e(v). For each edge (v, w) ∈ G, add an edge between a vertex of A(v) and a vertex of A(w), using each vertex of A(v) for exactly one edge of G incident to v.

Theorem 6.2.6. G has an f -factor iff f ≤ d and the graph H constructed as above has a 1-factor.

Proof. If G has an f-factor F, match the endpoints in A(v), A(w) of each edge (v, w) ∈ F via the corresponding edge of H. Then exactly e(v) vertices in A(v) remain unmatched; these can be matched arbitrarily with the e(v) vertices of B(v), giving us a 1-factor of H.
Conversely, suppose H has a 1-factor. Remove B(v) and the vertices in A(v) matched to B(v). Now, at each v, A(v) has f(v) matched vertices remaining, and their matching edges correspond to edges of G. If we merge these f(v) vertices and call the merged vertex v, we get an f-factor of G.

Observe that we did not use the fact that G is a simple graph. Now Tutte's 1-factor condition can be translated into an f-factor condition; see [West 2001, Exercise 3.3.29]. An important application is the Erdös-Gallai characterization of degree sequences of graphs (see Section 6.5).

Exercise* 6.2.7. 1. A tree T has a perfect matching iff o(T − v) = 1 for all v ∈ T .

2. Show that 2α0 (G) = minS⊂V {|V (G)| − d(S)} where d(S) = o(G − S) − |S|.

3. For any k, show that there are k-regular graphs with no perfect matching.

4. If G is k-regular, has even number of vertices and remains connected when any (k − 2) edges are
deleted, then G has a 1-factor.

5. Every 3-regular graph with no cut-edge has a 1-factor.

6. Let Qn be the hypercube graph on {0, 1}^n. What are α(Qn ), α0 (Qn ), β(Qn ), β 0 (Qn ) ?

7. Every 3-regular simple graph with no cut-edge decomposes (i.e., edge-disjoint union) into copies
of P4 (the 4-vertex path).

8. Let G be a k-regular bipartite graph. Prove that G can be decomposed into r-factors iff r divides
k.

9. Is it true that every tree has at most one perfect matching ?

10. Let T be a tree on n vertices such that α(T ) = k. Can you determine α0 (T ) in terms of n, k ?

11. Characterize the graphs G for which the following statements hold. Justify your answers.
(1) (max. independent set) α(G) = 1.
(2) (max. size of matching) α0 (G) = 1.
(3) (min. vertex cover) β(G) = 1.
(4) (min. edge cover) β 0 (G) = 1.
NOTE : In each of the above, you are required to prove a statement of the form . . . (G) = 1 iff
G is . . ..

6.3 ***Gale-Shapley Stable marriage/matching algorithm***


In more real-life scenarios, like college admission, students have preferences for colleges (i.e., a ranking)
and so do the colleges. This in graph-theoretic terms, corresponds to a complete bi-partite graph with
every vertex ranking every other vertex in the opposite partition. Stability of matching here would
refer to the fact that there is no pair of vertices that would prefer each other over their current pairing
i.e., if u ∼ v and w ∼ x in M with u, w ∈ V1 , v, x ∈ V2 , then it is not possible that u prefers x to v
and x prefers u to w.
Does such a matching exist for all possible preferences ? And more importantly, can one construct such a matching ?
Gale-Shapley Algorithm : In 1962, David Gale and Lloyd Shapley ([Gale and Shapley 1962]) proposed an algorithm to achieve a stable matching, and this is probably the best known of such algorithms. Along with Alvin Roth, Lloyd Shapley was awarded the Nobel prize in economics in 2012 "for the theory of stable allocations and the practice of market design."

1. Input : n Men, n Women and their preferences.

Step 1 : Each "unengaged" man "proposes" to the highest-ranked woman who has not yet rejected him.

Step 2 : Each woman agrees to get "engaged" to the "highest-ranked proposer" on her list. If she is previously engaged and the current proposer is ranked higher, she rejects the previous engagement. The other proposers are also rejected.

Step 3 : If any ”engagement” is nullified, the man becomes ”unengaged”

Step 4: Go to Step 1 if there is an ”unengaged” man.

See [West 2001, Theorem 3.2.18] for the formal theorem statement and proof of the algorithm.
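For concreteness, here is a minimal Python sketch of the proposal rounds described above (an illustration only; the data layout and names are my own, not from [Gale and Shapley 1962]).

```python
def gale_shapley(men_prefs, women_prefs):
    """men_prefs[m] and women_prefs[w] are preference lists, most preferred first.
    Returns a stable matching as a dict woman -> man."""
    # rank[w][m] = position of man m in woman w's list (lower = better).
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}   # index of next woman to propose to
    engaged_to = {}                             # woman -> man
    free_men = list(men_prefs)

    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_proposal[m]]      # highest-ranked woman not yet tried
        next_proposal[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free_men.append(engaged_to[w])      # previous fiance becomes unengaged
            engaged_to[w] = m
        else:
            free_men.append(m)                  # w rejects m; he stays unengaged
    return engaged_to

men = {'A': ['x', 'y'], 'B': ['y', 'x']}
women = {'x': ['B', 'A'], 'y': ['A', 'B']}
print(gale_shapley(men, women))  # A-x and B-y: both men get their top choice
```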

6.4 ***Shannon rate of communication***


See [Aigner et al. 2010, Chapter 37] for more details. Suppose V is a set of symbols to be commu-
nicated over a network. However, some symbols are likely to be confused. How best to choose a set
of symbols that cannot be confused ? Place an edge between two symbols that can be confused and
form a graph G. Let us call G, the confusion graph. A set of symbols that cannot be confused is an
independent set. Best strategy for communication is to choose independent set.
A different question leading to the same answer. V is set of people trying to communicate but two
neighbours cannot communicate at the same time. The best strategy is to choose an indepndent set
of vertices and they can communicate at the same time.
In the first problem, if we are allowed to communicate strings of symbols over multiple rounds, can we do better ? Given graphs G, H, define G × H as the graph with vertex set V(G) × V(H) and edges (u1, u2) ∼ (v1, v2) if (u1, u2) ≠ (v1, v2) and, for each i = 1, 2, either ui = vi or ui ∼ vi (two strings can be confused iff in every coordinate the symbols are equal or confusable). Then, the confusion graph for strings of length 2 is G2 := G × G. Similarly, the confusion graph for strings of length n is Gn.

In G, the rate of information per symbol is α(G); using strings of length n it is α(Gn)^{1/n} per symbol, and the Shannon capacity of G is Θ(G) := sup_n α(Gn)^{1/n}. See [Aigner et al. 2010, Chapter 37] for more on this quantity.
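As a quick illustration of why longer strings can help, the sketch below (my own code, using the product just defined) builds G × G for the 5-cycle C5 and computes independence numbers by brute force; one finds α(C5) = 2 but α(C5 × C5) = 5, so pairs of symbols do strictly better than 2² = 4.

```python
from itertools import combinations, product

def strong_product(vertices, adj):
    """adj(u, v) says whether u ~ v in G; returns vertices and adjacency of G x G."""
    V2 = list(product(vertices, repeat=2))
    def adj2(a, b):
        return a != b and all(x == y or adj(x, y) for x, y in zip(a, b))
    return V2, adj2

def independence_number(vertices, adj):
    """Largest size of a set of pairwise non-adjacent vertices (brute force)."""
    best = 0
    for size in range(1, len(vertices) + 1):
        found = any(
            all(not adj(u, v) for u, v in combinations(S, 2))
            for S in combinations(vertices, size)
        )
        if not found:
            break
        best = size
    return best

C5 = list(range(5))
c5_adj = lambda u, v: (u - v) % 5 in (1, 4)
V2, adj2 = strong_product(C5, c5_adj)
print(independence_number(C5, c5_adj), independence_number(V2, adj2))  # 2 and 5
```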

6.5 ***Erdös-Gallai Theorem***


A list of non-negative integers (d1, . . . , dn) is said to be graphic if it is the list of vertex degrees of an n-vertex simple graph. We present the famous theorem giving a necessary and sufficient condition for the existence of a simple graph with a given degree sequence. We have seen such a condition for trees already in Theorem 4.1.5.

Theorem 6.5.1 (Erdös-Gallai Theorem, 1960 ). A list of non-negative integers (d1, . . . , dn) in non-increasing order is graphic iff $\sum_i d_i$ is even and for all 1 ≤ k ≤ n we have that
$$\sum_{i=1}^{k} d_i \le k(k-1) + \sum_{i=k+1}^{n} \min\{d_i, k\}.$$

We present the recent proof due to [Tripathi 2010]. Another proof, using Tutte's f-factor theorem (Theorem 6.2.6), can be found in [West 2001, Exercise 3.3.29]. There is also a simple proof in [Choudum 1986].
A graph G on vertices v1, . . . , vn is said to be a subrealization of a nonincreasing list (d1, . . . , dn) if d(vi) ≤ di for all 1 ≤ i ≤ n. For a subrealization, we say r is the critical index if it is the largest index such that d(vi) = di for all 1 ≤ i < r. The sufficiency part (the non-trivial one) of the proof proceeds as follows : we start with the subrealization on n vertices with no edges. Assuming d1 ≠ 0, we have r = 1. While r ≤ n, we do not change the degrees of v1, . . . , vr−1 but try to increase the degree of the vertex vr and reduce the deficiency dr − d(vr).

Proof. The necessity is argued as follows : evenness is trivial. Consider the sum of the degrees of the k largest-degree vertices; each edge counted is either among these k vertices or goes outside. The edges among the k vertices are counted twice and hence contribute at most twice the number of edges of Kk, i.e., k(k − 1). The number of edges from v1, . . . , vk to vi for i > k is at most min{di, k}.
Now, we prove the sufficiency as described above : Given critical index r, define S = {vr+1 , . . . , vn }.
Our construction shall give that S is an independent set at every step. This is true for r = 1.

• Case 0 : vr ≁ vi for some vi with d(vi) < di. Add the edge (vi, vr).

• Case 1 : vr ≁ vi for some i < r. Since d(vi) = di ≥ dr > d(vr), there exists u ∼ vi with u ≁ vr and u ≠ vr. If dr − d(vr) ≥ 2, replace (u, vi) with (u, vr), (vi, vr). If dr − d(vr) = 1, then since $\sum_i (d_i - d(v_i))$ is even, there is an index k > r such that d(vk) < dk. Add the edge (vr, vk) if it is not present, and if it is, replace (u, vi), (vr, vk) with (u, vr), (vi, vr).

• Case 2 : v1, . . . , vr−1 ∈ N(vr) and d(vk) ≠ min{r, dk} for some k > r. Since d(vk) ≤ dk and S is independent, we have d(vk) ≤ r. Thus d(vk) < min{r, dk}. If vr ≁ vk, we can use Case 0. If not, since d(vk) < r, there exists i < r such that vk ≁ vi. Since d(vi) = di ≥ dr > d(vr), there exists u ∼ vi with u ≁ vr and u ≠ vr. Remove (u, vi) and add (u, vr), (vi, vk).

• Case 3 : v1, . . . , vr−1 ∈ N(vr) and vi ≁ vj for some i < j < r. Since d(vi) ≥ d(vj) > d(vr), there exist u ∼ vi with u ≁ vr, u ≠ vr, and w ∼ vj with w ≁ vr, w ≠ vr. It is possible that u = w. If u, w ∈ S, then delete (u, vi), (w, vj) and replace them with (vi, vj), (u, vr). If u ∉ S or w ∉ S, apply the arguments as in Case 1.

Suppose that none of the above cases applies. Since Case 1 doesn't apply, v1, . . . , vr−1 ∈ N(vr), and since Case 3 also doesn't apply, v1, . . . , vr are all pairwise adjacent. Case 2 also doesn't apply and hence d(vk) = min{r, dk} for all k > r. Since S is independent, we have that $\sum_{i=1}^{r} d(v_i) = r(r-1) + \sum_{k=r+1}^{n} \min\{r, d_k\}$ and hence $\sum_{i=1}^{r} (d_i - d(v_i)) \le 0$ by the Erdös-Gallai condition. Thus, we get that d(vi) = di for all 1 ≤ i ≤ r, and we have eliminated the deficiency at r. We can now increase r by 1 and continue with the procedure as above.
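The Erdős–Gallai condition itself is straightforward to test; the following small Python sketch (illustrative, with my own function name) checks the evenness and the k inequalities for a nonincreasing degree list.

```python
def is_graphic(degrees):
    """Erdos-Gallai test; degrees must be given in nonincreasing order."""
    d = list(degrees)
    n = len(d)
    if sum(d) % 2:
        return False
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(di, k) for di in d[k:])
        if lhs > rhs:
            return False
    return True

print(is_graphic([3, 3, 3, 3]))  # True: realized by K4
print(is_graphic([3, 3, 1, 1]))  # False: no simple graph has this degree sequence
```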

6.6 ***Equivalent theorems to Hall's matching theorem and more applications***
There are various powerful theorems in combinatorics and other areas that are equivalent to Hall’s mar-
riage theorem. See https://en.wikipedia.org/wiki/Hall’s_marriage_theorem#Logical_equivalences
We shall see two of them later - Max-flow min-cut theorem and Menger’s theorem (see Chapter
7).
Another non-trivial application of Hall's marriage theorem is to show the existence of the Haar measure on compact topological groups. See [Krishnapur 2019, Part 4] for details on this result and also for more about the equivalences mentioned above.
Chapter 7

Flows on networks, vertex and edge connectivity

7.1 Max-flow min-cut theorem




Given a simple graph G = (V, E), we set E to be the set of ordered edges of E whereby we give both
←−
the orientations to an edge e ∈ E. In other words, e ∈ E implies that e = (x, y) 6= (y, x) and we
rather denote −e = (y, x) in such a case. Further, we set e− = x, e+ = y.


Definition 7.1.1. (Flow ) Let s, t ∈ V (source, target). A flow f from s to t is a function f : E → R
such that

1. (anti-symmetry) f (e) = −f (−e)

2. (Kirchoff 's node law ) $(d^* f)(x) := \sum_{e : e^- = x} f(e) = 0$ for all x ∉ {s, t}.

3. (Positive output at source :) (d∗ f )(s) ≥ 0.

Definition 7.1.2. Let c : E → [0, ∞] be the capacity function. A flow from s to t is said to satisfy
the capacity constraints if f (e) ≤ c(e) and we call such a flow feasible.

One can suitably define the in-flow and out-flow at a vertex and show that Kirchoff's node law implies that the in-flow and out-flow are equal at all vertices other than the source and the sink. We can further define the value of a flow v(f) := (d∗f)(s), and it can be shown using Kirchoff's node law and anti-symmetry that v(f) = −(d∗f)(t).
Let S ⊂ V. We call the pair (S, Sᶜ) an (s, t)-cut if s ∈ S, t ∉ S. Defining $C(X, Y) = \sum_{x\in X, y\in Y} c(x, y)$ and $f(X, Y) = \sum_{x\in X, y\in Y} f(x, y)$, we can show that v(f) = f(S, Sᶜ) ≤ C(S, Sᶜ) for any (s, t)-cut (S, Sᶜ) and any feasible s − t flow. Thus, we have that sup{v(f) : f is a feasible (s, t)-flow} ≤ $\inf_{S : s\in S, t\notin S} C(S, S^c)$.

Theorem 7.1.3 (Max-flow min-cut theorem ; Elias-Feinstein-Shannon and Ford-Fulkerson (1956) ).
$$\max\{v(f) : f \text{ is a feasible } (s,t)\text{-flow}\} = \inf_{S \subset V : s \in S,\, t \notin S} C(S, S^c).$$


It will soon become clear that Max-flow min-cut theorem is not a single theorem but a class of
theorems that can be proven under different frameworks using variants of the ideas we will use to
prove the above theorem - Theorem 7.1.3.

Lemma 7.1.4. If there exists an s − t path all of whose edges have infinite capacity, then the max-flow is infinite and so is the min-cut. Else, the min-cut is finite and so is the max-flow.

Proof. The first part follows by constructing a sequence of feasible flows of increasing strength along the infinite-capacity path. We will prove the second part alone. Let there be no infinite-capacity s − t path. We construct a finite cut as follows : choose an s − t path P1; since it is not of infinite capacity, there exists e1 ∈ P1 such that c(e1) < ∞. Now repeat this procedure on G − e1 and choose an edge e2 of finite capacity on an s − t path P2. Repeating, we can choose edges e1, . . . , ek until G − {e1, . . . , ek} has no s − t path. Let S be the set of vertices in the component of s in G − {e1, . . . , ek}; clearly t ∈ Sᶜ. By the definition of S, we have that E ∩ (S × Sᶜ) ⊂ {e1, . . . , ek} and hence $C(S, S^c) \le \sum_{i=1}^{k} c(e_i) < \infty$.

Lemma 7.1.5. If the capacity function is bounded, then

sup{v(f ) : f is a feasible (s, t)-flow} = max{v(f ) : f is a feasible (s, t)-flow}.

Lemma 7.1.6. Let f be an s − t flow in a graph and let P = e1 . . . ek be an s − t path, i.e., $e_1^- = s$, $e_k^+ = t$ and $e_i^+ = e_{i+1}^-$ for all 1 ≤ i < k. Then for every ε > 0, f′ defined as follows is also a flow : f′(e) := f(e) for e, −e ∉ {e1, . . . , ek}; f′(e) = f(e) + ε for e = e1, . . . , ek; f′(e) = f(e) − ε for e = −e1, . . . , −ek. Further v(f′) = v(f) + ε.

Proof. Clearly anti-symmetry holds, and $(d^* f')(s) = \sum_{e:e^-=s} f'(e) = \sum_{e:e^-=s} f(e) + \varepsilon \ge 0$. It remains only to show that $(d^* f')(v) = 0$ for all v ≠ s, t. This holds trivially for all $v \ne e_i^-$, i = 2, . . . , k. Suppose $v = e_i^-$ for some 1 < i ≤ k. Then
$$(d^* f')(v) = \sum_{e:e^-=v} f'(e) = \sum_{e:e^-=v,\, e \ne -e_{i-1}, e_i} f(e) + f'(e_i) + f'(-e_{i-1}) = \sum_{e:e^-=v,\, e \ne -e_{i-1}, e_i} f(e) + f(e_i) + f(-e_{i-1}) = (d^* f)(v) = 0.$$

We first present the Ford-Fulkerson algorithm, which gives the idea of the proof.

Remark 7.1.7 (Ford-Fulkerson Algorithm ). Given a flow f , define the residual capacity cf (u, v) =
c(u, v) − f (u, v).

Step 1: Set f ≡ 0.

Step 2 : If there is a path P (called an f-augmenting path) from s to t in G such that cf (u, v) > 0 for all edges (u, v) ∈ P, then go to Step 3, else go to Step 6.

Step 3 : Find cf (P ) = min{cf (u, v) : (u, v) ∈ P } (residual capacity).

Step 4 : For each edge (u, v) ∈ P , set f (u, v) = f (u, v) + cf (P ) and f (v, u) = f (v, u) − cf (P ).

Step 5 : Go back to Step 2.

Step 6 : Output flow f as the maximal flow.

Defining Sf to be the set of vertices that can be reached from s by a path P such that cf (u, v) > 0 for all edges (u, v) ∈ P, note that Step 2 can also be rephrased as follows : if t ∈ Sf go to Step 3, else go to Step 6.
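The algorithm above translates almost directly into code. Below is a minimal Python sketch (my own implementation, using BFS to find augmenting paths, i.e., the Edmonds–Karp variant) for integral capacities on a directed network given as a capacity dictionary.

```python
from collections import deque

def max_flow(capacity, s, t):
    """capacity: dict mapping (u, v) -> capacity. Returns the max-flow value.
    Residual capacities are tracked in a copy that also holds reverse edges."""
    residual = dict(capacity)
    for (u, v) in capacity:
        residual.setdefault((v, u), 0)          # reverse edges start at 0
    adj = {}
    for (u, v) in residual:
        adj.setdefault(u, []).append(v)

    def augmenting_path():
        # BFS from s to t along edges with positive residual capacity.
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    if v == t:
                        return parent
                    queue.append(v)
        return None

    flow = 0
    while (parent := augmenting_path()) is not None:
        # Recover the path, find its residual capacity and update along it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for (u, v) in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck
    return flow

cap = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1, ('a', 't'): 2, ('b', 't'): 3}
print(max_flow(cap, 's', 't'))  # 5
```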

Proof. (Theorem 7.1.3) Suppose that the capacity function is bounded. We will prove the theorem
under this assumption and then argue the other case.
Let f be the max-flow whose existence is guaranteed by Lemma 7.1.5. Define

S = Sf = {v : there exists a positive residual s − v path} ∪ {s}.

If t ∉ S, then (S, Sᶜ) is an (s, t)-cut. Also, if e ∈ S × Sᶜ, then cf(e) = 0, as otherwise e+ ∈ S, which is a contradiction. As we argued before, and by this zero residual capacity of the edges in S × Sᶜ, we have that
v(f) = f(S, Sᶜ) = C(S, Sᶜ)

and so the theorem is proved, as we already have the reverse inequality.
If t ∈ S, then there is a positive residual s − t path P. If we increase the flow along this path by cf(P) as in Lemma 7.1.6 and define a new flow f′, then v(f′) = v(f) + cf(P). This will contradict the maximality of f if we show that f′ satisfies the capacity constraints. Trivially, the capacity constraint is satisfied for all e with e, −e ∉ P, as f′(e) = f(e) for such e. If −e ∈ P, then f′(e) ≤ f(e) ≤ c(e). If e ∈ P, then f′(e) = f(e) + cf(P) ≤ f(e) + cf(e) = c(e), and so the capacity constraint is satisfied everywhere and the proof is complete.
Now let us assume that the capacity function is unbounded. If there is an s − t path of infinite capacity, then by Lemma 7.1.4 both the max-flow and the min-cut are infinite and the theorem holds. Otherwise the min-cut is finite, so choose a cut (A, Aᶜ) such that C(A, Aᶜ) < ∞. Define c′ such that c′(e) = c(e)1[c(e) < ∞] + C(A, Aᶜ)1[c(e) = ∞]. Observe that the min-cut under c′ is the same as that under c. Further, if a flow is feasible w.r.t. c′ then it is feasible w.r.t. c as well. Since c′ is a bounded capacity function, we have a max-flow f such that v(f) = min-cut under c′. But then it also holds that v(f) = min-cut under c, and f is feasible under c as well.

Exercise 7.1.8. The last part of the proof shows that Lemma 7.1.5 holds under the assumption of
finite min-cut alone i.e., the assumption of bounded capacity can be relaxed considerably. Give a direct
proof of this without using Max-flow Min-cut theorem.

Theorem 7.1.9. Assume that the min-cut is finite. If the capacities are integral, the F-F algorithm terminates and, moreover, there is an integral maximal flow.

Proof. The algorithm starts with f ≡ 0 and, at every step, the flow on an edge is increased/decreased by the residual capacity cf(P) if the edge lies on the chosen f-augmenting path P. Since the capacities are integral, the residual capacities, and hence the flow, remain integral throughout. In particular, at every step the F-F algorithm increases the flow strength by at least one, as the residual capacity along any augmenting path is a positive integer. Since the min-cut, and hence the max-flow, is finite, the algorithm terminates in finitely many steps and outputs an integral maximal flow.

Example 7.1.10. The F-F algorithm need not terminate when capacities are non-integral; see https://en.wikipedia.org/wiki/Ford_Fulkerson_algorithm#Non-terminating_example for an example.

The definition of a flow can be extended to multiple sources and sinks. Let S ⊂ V and T ⊂ V be the sets of sources and sinks respectively. The definition of a flow is modified by requiring Kirchoff's node law to hold for all x ∉ S ∪ T and positive output at S, i.e., $\sum_{s\in S} (d^* f)(s) \ge 0$. An S − T cut is a set A ⊂ V such that S ⊂ A and T ⊂ Aᶜ. We again have a max-flow min-cut theorem as follows.

Theorem 7.1.11 (Max-flow min-cut theorem for multiple sources and sinks).
$$\max\{v(f) : f \text{ is a feasible } S-T \text{ flow}\} = \inf_{A \subset V : S \subset A,\, T \subset A^c} C(A, A^c).$$

Proof. Let us again assume that the min-cut is finite, i.e., there is no infinite-capacity S − T path. The trivial inequality follows as in the original theorem, as does the fact that the supremum is a maximum. To argue the equality, instead of repeating the proof, we shall use a reduction. Define G′ as follows : V′ = V ∪ {a, b}, E′ = E ∪ ({a} × S) ∪ (T × {b}). Set the capacity of the new edges to be infinite. Note that any S − T cut in G corresponds to an a − b cut in G′ of the same capacity. Further, any a − b cut involving edges in E′ \ E has infinite capacity. Hence the min a − b cut in G′ and the min S − T cut in G are equal.
If f′ is an a − b flow in G′, let f be the restriction of f′ to G. Clearly f is skew-symmetric and satisfies Kirchoff's node law at all x ∉ S ∪ T. We verify the last condition as follows : by Kirchoff's node law in G′ applied at the vertices of S,
$$0 = \sum_{s\in S} (d^* f')(s) = \sum_{e : e^-\in S,\, e^+\in V} f(e) + \sum_{e : e^-\in S,\, e^+ = a} f'(e) = \sum_{s\in S} (d^* f)(s) - (d^* f')(a).$$
Thus $v(f) = \sum_{s\in S}(d^* f)(s) = (d^* f')(a) = v(f') \ge 0$, and so the max-flow in G′ is equal to the max-flow in G. Hence the theorem follows from the single-source single-sink max-flow min-cut theorem.

A more general version of max-flow min-cut theorem is as follows : Let G = (V, E) be a directed
graph, s, t ∈ V and c : E → [0, ∞) be a capacity constraint function. Then f : E → [0, ∞) is a s − t
flow satisfying capacity constraint c if
1. $\sum_{e:e^-=x} f(e) = \sum_{e:e^+=x} f(e)$ for all x ≠ s, t. (conservation of flow / Kirchoff 's node law).

2. For all e ∈ E, f(e) ≤ c(e). (edge capacity constraint).

3. $|f| := \sum_{e:e^-=s} f(e) - \sum_{e:e^+=s} f(e) \ge 0$. (flow strength is non-negative.)

As before, a subset S ⊂ V is a directed s − t cut if s ∈ S, t ∉ S. Further, the capacity of a cut is defined as $c(S) := \sum_{e \in E \cap (S\times S^c)} c(e)$.

Exercise* 7.1.12. (Max-flow min-cut theorem for directed graphs.) Under the notation as above, we have that

max{|f| : f is an s − t flow satisfying capacity constraint c} = min{c(S) : S ⊂ V is an s − t cut}.

Proof Sketch : An f -augmenting x − y path is a path P : x = x0 , . . . , xk = y such that for


each 1 ≤ i ≤ k either (i) c(xi−1 , xi ) − f (xi−1 , xi ) > 0 or (ii) f (xi , xi−1 ) > 0. In simple words,
either the ‘forward’ edges are not of full capacity or the ‘backward’ edges have positive flow. Define
cf (xi−1 , xi ) = c(xi−1 , xi ) − f (xi−1 , xi ) in Case (i) and cf (xi−1 , xi ) = f (xi , xi−1 ) in Case (ii). If both
cases hold, pick one arbitrarily as cf (xi−1 , xi ). As before, set cf (P ) = min_i cf (xi−1 , xi ). Show that
the flow can be increased by adding cf (P ) to the ‘forward’ edges and subtracting cf (P ) from the
‘backward’ edges. Now use the proof idea of Theorem 7.1.3 to complete the proof.
A general version of max-flow min-cut theorem is stated in the exercises.

Exercise* 7.1.13. Use max-flow min-cut theorem for directed graphs to show the undirected version.

Exercise 7.1.14. What is the equivalent of the Ford-Fulkerson algorithm for flows on directed graphs ?

Exercise* 7.1.15. Does the F-F algorithm terminate if the capacities are rational ?

7.2 Vertex and edge connectivity


We say that a graph is k-vertex (resp. k-edge) connected if |V| > k and G − S is connected for any S ⊂ V (resp. |V| > 1 and S ⊂ E) with |S| < k. Define κ(G) (resp. λ(G)) as the largest k such that G is k-vertex (resp. k-edge) connected. We adopt the convention that G = ∅ is disconnected, and so G is 0-vertex or 0-edge connected iff G ≠ ∅. More simply, G is 1-vertex or 1-edge connected iff G is connected. Also, by the above convention λ(K1) = κ(K1) = 0.

Exercise 7.2.1. Compute λ(G), κ(G) for well-known graphs such as the complete graph, path graph, cycle graph, trees, Cayley graphs, the Petersen graph, et al.

Theorem 7.2.2 (Whitney (1932a)). Let G 6= ∅. Then κ(G) ≤ λ(G) ≤ δ(G).

Proof. Removing all edges incident on a vertex of minimum degree disconnects it from the rest (or leaves a single vertex), which proves the second inequality λ(G) ≤ δ(G). By definition κ(G) ≤ n − 1 where n = |V|. Let [S, Sᶜ] := E ∩ (S × Sᶜ) be a smallest edge-cut, i.e., λ(G) = |[S, Sᶜ]| (see Exercise 7.2.4 as to why the smallest edge-cut will be of this form). Assume that G is connected (else κ(G) = 0 = λ(G)), so that S, Sᶜ ≠ ∅. We will show that κ(G) ≤ |[S, Sᶜ]|.

If E ∩ (S × Sᶜ) = S × Sᶜ (i.e., every vertex in S is adjacent to every vertex in Sᶜ), then
$$|[S, S^c]| = |S||S^c| = |S|(n - |S|) \ge n - 1 \ge \kappa(G).$$
Else choose x ∈ S, y ∉ S such that x ≁ y. Define T := {z ∈ Sᶜ : z ∼ x} ∪ {z ∈ S − {x} : N(z) ∩ Sᶜ ≠ ∅}. Every x − y path has to pass through a vertex in T; in other words, there is no x − y path in G − T. Thus, we have that κ(G) ≤ |T|. Now we complete the proof by showing that |T| ≤ |[S, Sᶜ]|. For this, it suffices to observe that
$$\{(x, z) : z \in N(x) \cap S^c\} \cup \{(z, z') : z \in S - \{x\},\ z' \text{ a chosen vertex of } N(z) \cap S^c \ne \emptyset\} \subset E \cap (S \times S^c),$$
and the LHS has cardinality at least |T|. In other words, we have selected all the edges from x to Sᶜ and, for every other z ∈ S − {x} with a neighbour in Sᶜ, one edge each, obtaining at least |T| distinct edges.

7.2.1 Menger’s theorem

Similar to Max-flow min-cut theorem, Menger’s theorem also can be proven under differing frameworks.
We shall prove two of them and leave the rest as exercises.

Theorem 7.2.3. (Menger’s theorem for edge-connectivity on digraphs) Let s 6= t in a di-graph G.


Then we have that

λ(s, t) := max{k : there exist k edge-disjoint s − t directed paths }


= min{|E 0 | : E 0 is an s − t directed cut.} =: C(s, t),

where E 0 is an s − t directed cut if every directed path from s to t contains at least one edge in E 0 .

The proof proceeds by first showing that deletion of incoming edges at s and outgoing edges at t
do not change the LHS or RHS. Then, we shall show that by setting c ≡ 1 on E, the LHS equals the
max-flow and the RHS equals the min-cut thereby proving the theorem via max-flow min-cut theorem
for digraphs.

Proof. Assume that λ(s, t) > 0 else the theorem holds trivially.
Observe that λ(s, t), C(s, t) remain unchanged if we delete incoming edges at s and outgoing edges
at t and so we shall assume that there are no incoming edges at s and outgoing edges at t.
Suppose there are k edge-disjoint paths. Then, any s − t edge-cut E 0 has to contain at least one
edge from each of the disjoint paths and hence |E 0 | ≥ k. Thus, we trivially get that λ(s, t) ≤ C(s, t)
as in the Max-flow Min-cut theorem.
Now we construct a directed network on G by assigning capacity 1 to every edge. Since c ≡ 1,
the max-flow f is integral and further f(e) ∈ {0, 1} for all e. Trivially, v(f) ≥ λ(s, t), as we can send a unit flow along each of the disjoint paths; the flow properties and the strength are preserved under addition.
Consider the graph Gf := (V, {e : f(e) = 1}). If v(f) ≥ 1, there is an s − t path P1 in the graph Gf (see Exercise 7.2.5 for a more general claim). Now consider the graph Gf − P1. This is nothing but the graph Gf1 := (V, {e : f1(e) = 1}) where f1(e) = f(e) − 1[e ∈ P1]. Since e ↦ 1[e ∈ P1] is also a flow, by linearity f1 is also a flow and further v(f1) = v(f) − 1. Now, the same argument as above yields that there is an s − t path P2 in Gf − P1 if v(f1) ≥ 1, and P2 is edge-disjoint from P1. Repeating this argument, we obtain v(f) many edge-disjoint s − t paths, i.e., λ(s, t) ≥ MF(s, t) := max-flow in G, and hence λ(s, t) = MF(s, t).
By our definition of capacity, note that C(S, Sᶜ) = |E ∩ (S × Sᶜ)| for s ∈ S, t ∉ S. Since
$$\{E' \subset E : G - E' \text{ disconnects } s \text{ from } t\} \supset \{E \cap (S \times S^c) : s \in S,\, t \notin S,\, S \subset V\},$$
we have that C(s, t) ≤ MC(s, t) := min-cut in G. Thus we have that
$$MF(s, t) = \lambda(s, t) \le C(s, t) \le MC(s, t).$$

Now, from the Max-flow Min-cut theorem for directed graphs (Exercise 7.1.12), we have that the
inequalities are all equalities and the proof is complete.

Exercise 7.2.4. Give a direct proof that C(s, t) = M C(s, t).

Exercise* 7.2.5. Prove the following more general version of the claim used in the above proof : If f
is a s − t flow with v(f ) > 0, then there exists a directed s − t path in the graph Gf := {e : f (e) > 0}.

S ⊂ V is an s − t vertex cut if any directed path from s to t has a vertex in S.

Theorem 7.2.6. (Menger's theorem for vertex-connectivity on digraphs) Let s ≠ t in a di-graph G such that (s, t) ∉ E. Then we have that

max{k : there exist k internally vertex-disjoint s − t directed paths} = min{|S| : S is an s − t vertex cut}.

Proof. One can try to prove a Max-flow Min-cut theorem for vertex capacities and then use the same
to prove Menger’s theorem. But we will use a graph transformation to reduce the vertex-connectivity
case to edge-connectivity case itself.
Consider the following enhanced graph G′ : V′ := {s, t} ∪ {x1, x2 : x ∈ V − {s, t}} and, as for the edges E′, we put s → x1 if s → x in G ; x2 → t if x → t in G ; x2 → y1 if x → y in G ; and x1 → x2 for every x ∈ V − {s, t}. Set c(x1, x2) = 1 for all x and c ≡ ∞ on all other edges.
Note that any directed s − t path in G′ is of the form s a1 a2 b1 b2 . . . z1 z2 t, where s a b . . . z t is the corresponding directed s − t path in G. Further, if P1, P2, . . . , Pk are internally vertex-disjoint s − t paths in G, the corresponding paths in G′ are trivially edge-disjoint as well. Conversely, let P′1, P′2 be two edge-disjoint paths in G′, say s a1 a2 b1 b2 . . . z1 z2 t and s a′1 a′2 b′1 b′2 . . . z′1 z′2 t respectively. Then edge-disjointness implies that the edge (a1, a2) is different from each of (a′1, a′2), (b′1, b′2), . . . , (z′1, z′2), so a ∉ {a′, b′, . . . , z′}, and similarly for b and so on. Thus, the corresponding paths P1, P2 in G are internally vertex-disjoint. Thus we have that

λ′(s, t) := max number of edge-disjoint s − t paths in G′ = max number of internally vertex-disjoint s − t paths in G =: κ(s, t).



If S ⊂ V − {s, t} is an s − t vertex cut in G, then {(x1, x2) : x ∈ S} is an s − t edge-cut in G′. In other words, if C′(s, t) is the min edge-cut in G′ and VC(s, t) is the min vertex-cut in G, then we have from this correspondence that C′(s, t) ≤ VC(s, t). Consider an edge-cut E″ ⊂ E′ in G′ such that E″ ⊄ {(x1, x2) : x ∈ V − {s, t}}. Then C(E″) = ∞ and it cannot be the min edge-cut in G′. Thus the min edge-cut in G′ consists of edges in {(x1, x2) : x ∈ V − {s, t}}, which in turn arise from s − t vertex cuts in G. Hence C′(s, t) = VC(s, t).
Since λ′(s, t) = κ(s, t) and C′(s, t) = VC(s, t), Menger's theorem for vertex connectivity follows from the Max-flow Min-cut theorem for directed graphs (Exercise 7.1.12) or Menger's theorem for edge connectivity (Theorem 7.2.3).
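The vertex-splitting construction is mechanical and easy to express in code. Here is a small illustrative Python sketch (my own naming; the vertex copies (x, 'in'), (x, 'out') play the roles of x1, x2) that builds the enhanced digraph G′ with unit capacities on the internal edges and infinite capacities elsewhere.

```python
import math

def split_vertices(vertices, arcs, s, t):
    """arcs: list of directed edges (u, v). Returns the capacity dict of G'.
    Each x other than s, t becomes x_in -> x_out with capacity 1."""
    def head(x):  # where arcs *into* x now point
        return x if x in (s, t) else (x, 'in')
    def tail(x):  # where arcs *out of* x now start
        return x if x in (s, t) else (x, 'out')

    capacity = {}
    for x in vertices:
        if x not in (s, t):
            capacity[((x, 'in'), (x, 'out'))] = 1     # the unit bottleneck at x
    for (u, v) in arcs:
        capacity[(tail(u), head(v))] = math.inf       # original arcs: infinite
    return capacity

# Example: two internally disjoint s-t paths, one through a, b and one through c.
V = ['s', 'a', 'b', 'c', 't']
A = [('s', 'a'), ('a', 'b'), ('b', 't'), ('s', 'c'), ('c', 't')]
cap = split_vertices(V, A, 's', 't')
# Feeding `cap` to a max-flow routine (e.g. the Ford-Fulkerson sketch earlier,
# with the infinities replaced by a large finite number) returns 2, the maximum
# number of internally vertex-disjoint s-t paths, as Menger's theorem predicts.
```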

Exercise* 7.2.7. (Menger's theorem for edge connectivity ; Menger (1927)) Let G be a finite undirected graph and u and v two distinct vertices. Then the size of the minimum edge cut separating u and v (the minimum (u, v)-edge cut) is equal to the maximum number of pairwise edge-disjoint paths from u to v.

Exercise* 7.2.8. (Menger's theorem for vertex connectivity ; Menger (1927)) Let G be a finite undirected graph, and let u and v be nonadjacent vertices in G. Then, the maximum number of pairwise-internally-disjoint (u, v)-paths in G equals the minimum number of vertices from V(G) − {u, v} whose deletion separates u and v.

See https://en.wikipedia.org/wiki/Menger’s_theorem for versions of Menger’s theorem and


variants.

7.3 ***Some applications***

We can prove Hall’s marriage theorem as well as Konig-Egervary theorem using max-flow min-cut or
Menger’s theorem. See exercises in the next section. Also, see Section 6.6.
The max-flow min-cut theorem can be derived from a more powerful theorem called the strong
duality theorem in linear programming (see https://en.wikipedia.org/wiki/Max-flow_min-cut_
theorem#Linear_program_formulation). This latter theorem for example can be used to prove
the Monge-Kantorovich duality theorem in Optimal transport theory (https://en.wikipedia.org/
wiki/Transportation_theory_(mathematics)).

7.4 Exercises

1. Let G be a graph and S, T ⊂ V (G) such that S ∩ T = ∅. Formulate an appropriate notion of a


vertex cut and prove a version of the vertex form of the Menger’s theorem for the following two
scenarios.

(a) Consider the maximum number of disjoint paths from S to T such that the paths do not
intersect even at S and T .
(b) Consider the maximum number of disjoint paths from S to T such that the paths are
allowed to intersect at S or T .

2. Show that edge connectivity and vertex connectivity are equal if ∆(G) ≤ 3.

3. If G is a connected graph and for every edge e, there are cycles C1 and C2 such that E(C1 ) ∩
E(C2 ) = {e} then G is 3-edge connected.

4. What is the vertex and edge connectivity of the Petersen graph ?

5. Let F be a non-empty set of edges in G. Prove that F is an edge-cut of the form F = E ∩(S ×S c )
for some S ⊂ V in G iff F contains an even number of edges from every cycle C.

6. If G is a k-connected graph and a new graph G0 is formed by adding a new vertex y with at
least k neighbours in G, then G0 is also k-connected.

7. Let G be a graph with at least 3 vertices. Show that G is 2-connected ⇔ G is connected and has no cut-vertex ⇔ for all x, y ∈ V(G) there exists a cycle through x and y ⇔ δ(G) ≥ 1 and every pair of edges lies in a common cycle.

8. Consider a directed graph G = (V, E) and s, t ∈ V . Further assume that there are no incoming
edges at s or no out-going edges at t. An elementary s − t flow is a flow f which is obtained by
assigning a constant positive value a to the edges on a simple directed s − t path and 0 to all
other edges. Show that every flow is a sum of elementary flows and a flow of strength 0.

9. Consider a directed graph G = (V, E) and s, t ∈ V . Further assume that there are no incoming
edges at s or no out-going edges at t. If f is an integral flow of strength k, show that there exist
k directed paths p1 , . . . , pk such that for all e ∈ E, |{pi : e ∈ pi }| ≤ f (e).

10. * (Generalized max-flow min-cut theorem :) Let G = (V, E) be a directed graph, s, t ∈ V and
c : (V − {s, t}) ∪ E → [0, ∞) be a capacity constraint function. Then f : E → [0, ∞) is a s − t
flow satisfying capacity constraint c if
(a) $\sum_{e:e^-=x} f(e) = \sum_{e:e^+=x} f(e)$ for all x ≠ s, t. (conservation of flow / Kirchoff 's node law).
(b) For all e ∈ E, f(e) ≤ c(e). (edge capacity constraint).
(c) $\sum_{e:e^+=x} f(e) \le c(x)$ for all x ≠ s, t. (vertex capacity constraint.)
(d) $|f| := \sum_{e:e^-=s} f(e) - \sum_{e:e^+=s} f(e) \ge 0$. (flow strength is non-negative.)

A subset S ⊂ V ∪ E is an s − t cut if every path from s to t passes through either a vertex or an edge in S. Further, the capacity of a cut is defined as $c(S) := \sum_{v\in S\cap V} c(v) + \sum_{e\in S\cap E} c(e)$.

Show that

max{|f| : f is an s − t flow satisfying capacity constraint c} = min{c(S) : S ⊂ V ∪ E is an s − t cut}.

11. * Can you state the defective version of Hall’s marriage theorem as a max flow-min cut theorem
?

Question 7.4.1. Can you state Tutte’s 1-factor theorem also as a max-flow min-cut theorem ?
Chapter 8

Chromatic number and polynomials

8.1 Graph coloring

Definition 8.1.1. (Coloring of a graph ) A graph is k-colorable if ∃ c : V → [k] such that c(u) 6= c(v)
if (u, v) ∈ E. The chromatic number χ(G) is defined as

χ(G) := inf{k : G is k-colorable}.

Trivially, we have that cl(G) ≤ χ(G) ≤ ∆(G) + 1, where cl(G) is the size of the largest clique (complete subgraph) in G. Further, a graph is k-colorable iff we can partition V into k independent sets.

Exercise 8.1.2. G is k-colorable iff there exists a homomorphism from G to Kk.

8.2 Chromatic Polynomials

Let G be a graph on n vertices. Let P (G, q), q ∈ N∗ be the number of ways of coloring (properly) a
graph with q colours. More formally,

P (G, q) = |{c : V → [q] : c(u) 6= c(v)∀u ∼ v}|.

We will now show that this is a monic polynomial of degree n and establish other interesting properties. Let G − e denote the graph with the edge e removed. Let G/e denote the graph with e contracted, i.e., if e = (v, w) then we replace the vertices v, w with a single vertex v′ and the edges v ∼ x, w ∼ y with v′ ∼ x, v′ ∼ y. Observe that G/e can be a multigraph, but this will not matter to us as proper colorings of a multigraph and of its corresponding simple graph are the same.

Proposition 8.2.1. For any edge e = (v, w) ∈ G,

P(G, q) = P(G − e, q) − P(G/e, q), q ∈ N∗.


Proof. The proof follows from two simple observations : the proper q-colorings of G are exactly the proper q-colorings of G − e in which v and w receive distinct colours, and the proper q-colorings of G − e in which v and w receive the same colour correspond exactly to the proper q-colorings of G/e.
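The deletion–contraction recursion can be run directly on small graphs. Below is a minimal Python sketch (my own implementation; edges are stored as 2-element frozensets of vertex labels) computing P(G, q) recursively.

```python
def chromatic_value(edges, n_vertices, q):
    """P(G, q) via deletion-contraction; G is simple with vertices 0..n_vertices-1."""
    def rec(vertices, edges):
        if not edges:
            return q ** len(vertices)
        e = next(iter(edges))
        v, w = tuple(e)
        deleted = edges - {e}
        # Contract: merge w into v; frozensets collapse parallel edges, and a
        # possible loop {v} is discarded for safety.
        contracted = frozenset(
            frozenset(v if x == w else x for x in f) for f in deleted
        ) - {frozenset({v})}
        return rec(vertices, deleted) - rec(vertices - {w}, contracted)

    vertices = frozenset(range(n_vertices))
    return rec(vertices, frozenset(frozenset(e) for e in edges))

# Triangle K3: P(K3, q) = q(q-1)(q-2), so P(K3, 3) = 6.
print(chromatic_value([(0, 1), (1, 2), (0, 2)], 3, 3))  # 6
```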

If E = ∅, trivially P(G, q) = q^n. Now, using Proposition 8.2.1, we can inductively show that P(G, q) agrees with a polynomial in q. Hence, we shall define P(G, x), x ∈ R, to be the polynomial such that P(G, q) is the number of proper q-colorings of G for q ∈ N∗. Here are some very important properties.

Lemma 8.2.2. 1. P (G, x) is a monic polynomial of degree n.

2. P (G, x) is the unique polynomial such that P (G, q) is the number of proper q-colorings of G for
q ∈ N.

3. χ(G) = min{k ∈ N : P (G, k) > 0}.


Now, let us assume that $P(G, x) = \sum_{i=1}^{n} a_i x^i$ (as a0 = 0 trivially). Then we have that

4. $\sum_{i=1}^{n} a_i = 0$ or P(G, x) = x^n.

5. an−1 = −|E|.

6. an−i = (−1)i |an−i |.

Proof. All the proofs will follow from deletion-contraction principle and induction. For abbreviation,
we call this DCI argument in this section.

1. Suppose P (G − e, x) is a monic polynomial of degree n and P (G/e, x) is a monic polynomial of


degree n − 1, then by DC principle P (G, x) is a monic polynomial of degree n. Now the proof
can be completed by induction.

2. The proof follows because two polynomials that agree at infinitely many points (here at all q ∈ N∗) are equal.

3. This follows trivially by the two definitions.

4,5 and 6 : We will apply the DCI argument again and prove (4),(5) and (6) together. The three identities
can be verified for E = ∅ or G has 1 or 2 vertices easily. Let

$$P(G-e, x) = \sum_{i=0}^{n} (-1)^i |b_{n-i}| x^{n-i}, \qquad P(G/e, x) = \sum_{i=0}^{n-1} (-1)^i |c_{n-1-i}| x^{n-1-i} = \sum_{i=1}^{n} (-1)^{i-1} |c_{n-i}| x^{n-i}.$$
Then by the DC principle we have that
$$P(G, x) = |b_n| x^n + \sum_{i=1}^{n} (-1)^i \big[|b_{n-i}| + |c_{n-i}|\big] x^{n-i}.$$
As for (4), observe that $\sum_{i=1}^{n} a_i = P(G, 1)$ = number of proper 1-colorings of G. If E ≠ ∅, there exists no proper 1-coloring of G and so P(G, 1) = 0. Else P(G, x) = x^n as required.

(6) follows easily as an−i = (−1)^i [|bn−i| + |cn−i|]. As for (5), observe by the monicity of the chromatic polynomial (so cn−1 = 1), induction and the DC principle that
$$|E(G)| = |E(G-e)| + 1 = -b_{n-1} + c_{n-1} = |b_{n-1}| + |c_{n-1}| = -a_{n-1},$$
where we have used the induction hypothesis (items (5) and (6)) for G − e and the monicity of P(G/e, x).

where in the second equality we have used (6) for bn−1 .

Lemma 8.2.3. If G has k components G1, . . . , Gk then
$$P(G, x) = \prod_{i=1}^{k} P(G_i, x)$$
and further a0 = . . . = ak−1 = 0, |ak| > 0.

Proof. The first equality follows trivially because any combination of proper q-colorings of the components gives a proper q-coloring of the full graph G, i.e., $P(G, q) = \prod_{i=1}^{k} P(G_i, q)$ for q ∈ N∗. Since deg(P(Gi, x)) ≥ 1 and each P(Gi, x) has zero constant term, we have that a0 = . . . = ak−1 = 0 and |ak| > 0.

Theorem 8.2.4. $P(G, x) = \sum_{X\subset E} (-1)^{|X|} x^{\beta_0(X)}$, where β0(X) = β0((V, X)) is the number of connected components of the graph (V, X).

Proof. It is enough to show that P(G, q) = ∑_{X⊂E} (−1)^{|X|} q^{β_0(X)} for q ∈ N∗. Call a coloring c : V → [q] improper if c(u) = c(v) for some u ∼ v. Let IC be the set of all improper colorings. For e = (u, v) define B_e := {c ∈ IC : c(u) = c(v)}. Then we have that

P(G, q) = q^n − |IC| = q^n − |∪_{e∈E} B_e|
        = q^n − ∑_{∅ ≠ X ⊂ E} (−1)^{|X|−1} |∩_{e∈X} B_e|
        = q^n + ∑_{∅ ≠ X ⊂ E} (−1)^{|X|} |∩_{e∈X} B_e|.

To complete the proof, it is enough to prove that for ∅ ≠ X ⊂ E, |∩_{e∈X} B_e| = q^{β_0(X)}. Suppose X = {e_1, . . . , e_k} where e_i = (u_i, v_i), 1 ≤ i ≤ k. Let C_1, . . . , C_m be the components in (V, X), i.e., m = β_0(X). (It is possible that u_i = u_j or u_i = v_j for i ≠ j.) Thus, we have that

∩e∈X Be = {c : V → [q] : c(ui ) = c(vi ), 1 ≤ i ≤ k}


= {c : V → [q] : c is a constant on each Ci , 1 ≤ i ≤ m}

The latter is obtained by choosing one color for each component C_1, . . . , C_m and so its cardinality is q^m as required.
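The identity in Theorem 8.2.4 is easy to check mechanically on small graphs by enumerating all edge subsets. The sketch below (in the same hypothetical Python/sympy setting as before, with β_0 computed by a union-find) compares it against the known chromatic polynomial (x − 1)^4 + (x − 1) of the 4-cycle:

    from itertools import combinations
    import sympy as sp

    def beta0(vertices, edges):
        """Number of connected components of (vertices, edges) via union-find."""
        parent = {v: v for v in vertices}
        def find(u):
            while parent[u] != u:
                parent[u] = parent[parent[u]]
                u = parent[u]
            return u
        for u, v in edges:
            parent[find(u)] = find(v)
        return len({find(v) for v in vertices})

    def whitney_expansion(vertices, edges):
        x = sp.symbols('x')
        return sp.expand(sum((-1) ** k * x ** beta0(vertices, X)
                             for k in range(len(edges) + 1)
                             for X in combinations(edges, k)))

    V, E = {1, 2, 3, 4}, [(1, 2), (2, 3), (3, 4), (4, 1)]      # the 4-cycle C4
    x = sp.symbols('x')
    print(sp.expand(whitney_expansion(V, E) - ((x - 1)**4 + (x - 1))))   # 0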

8.3 Exercises
1. Find the chromatic polynomial of the complete graph K_n, the cycle C_n, the wheel graph W_n (i.e., the graph obtained by adding a new vertex to C_n and connecting it to all the n vertices of C_n), the path graph P_n and the Petersen graph.
2. Show that the coefficient of x^{n−2} in the chromatic polynomial of an n-vertex graph is m(m−1)/2 − T, where T is the number of triangles and m is the number of edges.

3. Show that a graph G with n vertices is a tree iff P(G, x) = x(x − 1)^{n−1}.

8.4 ***Read’s conjecture and Matroids***


For more details, refer to https://en.wikipedia.org/wiki/Chromatic_polynomial#Properties,
http://www-math.ucdenver.edu/~wcherowi/courses/m4408/gtln6.htm and
http://matroidunion.org/?p=1301.

Question 8.4.1 (Read's conjecture, 1968). |a_0|, . . . , |a_n| form a log-concave sequence, i.e., |a_i|^2 ≥ |a_{i−1}||a_{i+1}|.

Verify the conjecture in the cases where you can compute the coefficients, such as K_n, C_n, P_n, etc. This conjecture was solved recently by June Huh (in his first year of Ph.D.) in 2012 using ideas from algebraic geometry.
The crucial DC principle holds for more general combinatorial objects called matroids (https://en.wikipedia.org/wiki/Matroid) and, analogously to the chromatic polynomial, one can define what is called the characteristic polynomial of a matroid. The characteristic polynomial of a matroid is defined analogously to the identity in Theorem 8.2.4. Read's conjecture was generalized to the characteristic polynomial of matroids and is known as the Rota-Welsh conjecture. This was proved in 2015 by Karim Adiprasito, June Huh, and Eric Katz. An accessible exposition of these ideas can be found in the blog of Matt Baker ([Baker 2015]) and a slightly more technical account in his survey ([Baker 2018]).
Chapter 9

Graphs and matrices

See [Bapat 2010] for a recollection of the basic linear algebra facts used.

9.1 Incidence matrix and connectivity




Let n = |V|, m = |E|. We shall assign an orientation to every edge e ∈ E and call E⃗ this set of oriented edges. The orientation can be arbitrary.

Definition 9.1.1 (Incidence matrix). ∂_1 is an n × m matrix with ∂_1(i, j) = χ_{e_j}(v_i) where χ_e(v) = 1[v = e^+] − 1[v = e^−] for e = (e^−, e^+) ∈ E⃗, v ∈ V.

∂_1(i, j)   e_1   e_2   ...   e_m
v_1          −1     0    ...    0
v_2           1    −1    ...    1
...          ...   ...   ...   ...
v_n           0     0    ...   −1

We set ∂0 = [1, . . . , 1] as a 1 × n-matrix.

Remark 9.1.2. We consider the matrices as real matrices and our matrix algebra will be with respect to R. One can consider any other field F instead of R as well.

Further, set C_j = {0} for j ≤ −2, C_{−1} = R, C_0 = {∑_{i=1}^{n} a_i v_i : a_i ∈ R} and C_1 = {∑_{i=1}^{m} a_i e_i : a_i ∈ R, e_i ∈ E⃗}, where the sums are to be considered as formal sums. Observe that C_0, C_1 are R-vector spaces and note that C_0 ≅ R^V, C_1 ≅ R^{E⃗}. The canonical basis for C_0 is {v_1, . . . , v_n} and for C_1 is {e_i : e_i ∈ E⃗}. Here, we think of v_1 = v_1 + 0v_2 + . . . + 0v_n, e_1 = e_1 + 0e_2 + . . . + 0e_m and similarly for the other v_i's and e_j's.
We shall view ∂0 , ∂1 as linear maps in the following sense :
∂_0 : C_0 → C_{−1},  z = ∑_{i=1}^{n} a_i v_i ↦ ∑_i a_i = ∂_0 [a_1, . . . , a_n]^t,


∂_1 : C_1 → C_0,  z = ∑_{i=1}^{m} a_i e_i ↦ ∑_i a_i (e_i^+ − e_i^−).

Suppose we represent ∂_1 z = ∑_{i=1}^{n} b_i v_i; then (b_1, . . . , b_n)^t = ∂_1 [a_1, . . . , a_m]^t. Further, if the columns of the matrix ∂_1 are C_1, . . . , C_m, then (b_1, . . . , b_n)^t = ∑_{i=1}^{m} a_i C_i.
Assume that G ≠ ∅. We set B_i = Im(∂_{i+1}), Z_i = Ker(∂_i).

Remark 9.1.3. The above abstraction of representing C_0, C_1 as formal sums seems a little unnecessary as we could have also represented C_0, C_1 as functions from V → R and E⃗ → R respectively. However,
this representation will be notationally convenient and is borrowed from algebraic topology where one
considers richer structures giving rise to further vector spaces such as C2 , C3 , . . .. Further, if we
choose Z-coefficients instead of R-coefficients, C0 , C1 are modules and not vector spaces. Thus, the
above representation allows us to carry over similar algebraic operations even in such a case. As
an exercise, the algebraically inclined students can try to repeat the proofs with Z-coefficients. See
[Edelsbrunner 2010] for an accessible introduction to algebraic topology and [Munkres 2018] for more
details.

Trivially, we have that B_{−1} = C_{−1} = R. Thus, by the rank-nullity theorem, we have that r(Z_0) = n − 1, where by r(·) we denote the rank/dimension of a vector space. It is easy to see that v_i − v_j ∈ Z_0 for all i ≠ j. Further, we have that v_1 − v_i, i = 2, . . . , n, are linearly independent and hence form a basis for Z_0. Observe that for z = ∑_{i=1}^{m} b_i e_i,

∂_0 ∂_1 z = ∂_0 (∑_{i=1}^{m} b_i (e_i^+ − e_i^−)) = ∑_{i=1}^{m} b_i (∂_0 e_i^+ − ∂_0 e_i^−) = 0,

where in the first and third equalities we have used the definitions of ∂_1, ∂_0 respectively, and in the second equality the linearity of ∂_0. Thus, we have that ∂_0 ∂_1 = 0, in other words B_0 ⊂ Z_0. The same can also be deduced by noting that, as a matrix product, ∂_0 ∂_1 is nothing but the sum of the rows of ∂_1 and hence 0.
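A quick numerical illustration of these facts (a sketch assuming numpy; the graph below, a triangle together with a disjoint edge, is our own toy example): we build the oriented incidence matrix, check that ∂_0 ∂_1 = 0, and check that r(∂_1) = n − β_0(G), in line with Theorem 9.1.5 below.

    import numpy as np

    def incidence_matrix(n, oriented_edges):
        """Column of e = (e-, e+) has -1 at the tail e- and +1 at the head e+."""
        d1 = np.zeros((n, len(oriented_edges)))
        for j, (tail, head) in enumerate(oriented_edges):
            d1[tail, j] = -1.0
            d1[head, j] = +1.0
        return d1

    edges = [(0, 1), (1, 2), (2, 0), (3, 4)]   # triangle on {0,1,2} plus the edge {3,4}
    d1 = incidence_matrix(5, edges)
    d0 = np.ones((1, 5))

    print(d0 @ d1)                       # the zero row vector, i.e. d0 d1 = 0
    print(np.linalg.matrix_rank(d1))     # 3 = n - beta0(G) = 5 - 2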
Now, if we can understand r(B_0) then we can understand all the ranks involved. Observe that B_0 is nothing but the column space of ∂_1, i.e., denoting the columns of the matrix ∂_1 by C_1, . . . , C_m, we have that

B_0 ≅ {∑_{i=1}^{m} a_i C_i : a_i ∈ R}.

Hence r(B0 ) is nothing but the maximum number of linearly independent column vectors.
Suppose e_1, . . . , e_k denote the edges corresponding to the linearly independent columns. Assume that the subgraph H = e_1 ∪ . . . ∪ e_k is connected. Let V(H) = {v_1, . . . , v_l}. Consider the incidence matrix restricted to {v_1, . . . , v_l} × {e_1, . . . , e_k}; call it ∂_1′. Since the non-zero entries of the columns corresponding to the e_i's lie in the rows v_1, . . . , v_l, the columns of ∂_1′ are also linearly independent and so the column rank of ∂_1′ is k; moreover k ≥ l − 1 as H is connected. Let R_1, . . . , R_l be the rows of the matrix ∂_1′. Since every column has exactly one +1 and one −1, we have that ∑_i R_i = 0 and thus the row rank of ∂_1′ is at most l − 1. Since the row rank equals the column rank, we have

that k = l − 1, i.e., H is a tree. Applying the same argument to each connected component, we have proved that if e_1, . . . , e_k denote the edges corresponding to linearly independent columns, then every component of e_1 ∪ . . . ∪ e_k is a tree, i.e., e_1 ∪ . . . ∪ e_k is a forest or acyclic subgraph. We now show the converse and determine a basis for B_0.

Proposition 9.1.4. Let Ci be the columns corresponding to ei in ∂1 . Then e1 ∪ . . . ∪ ek is acyclic iff


C1 , . . . , Ck are linearly independent.

Proof. We have shown the 'if' part. We will show the 'only if' part. Suppose that H = e_1 ∪ . . . ∪ e_k is acyclic. We will show that C_1, . . . , C_k are linearly independent by induction. The conclusion holds trivially for k = 1. Suppose it holds for all l < k. WLOG assume that e_k is a leaf edge with deg_H(e_k^+) = 1. Thus e_k^+ ∉ e_1 ∪ . . . ∪ e_{k−1}. If ∑_{i=1}^{k} a_i C_i = 0 then, since e_k^+ lies in e_k only, we have that a_k = 0. Thus ∑_{i=1}^{k−1} a_i C_i = 0 and, since e_1 ∪ . . . ∪ e_{k−1} is acyclic, by the induction hypothesis we have that C_1, . . . , C_{k−1} are linearly independent, i.e., a_i = 0 for 1 ≤ i ≤ k.

From the previous proposition, the following theorem follows easily.

Theorem 9.1.5. 1. Let e_1 ∪ . . . ∪ e_k be the (maximal) spanning forest in G. Then

B_0 = {∑_{i=1}^{k} a_i ∂_1(e_i) : a_i ∈ R}.

2. r(B0 ) = n − β0 (G), r(Z0 ) − r(B0 ) = β0 (G) − 1.

3. G is connected iff B0 = Z0 .

By the rank-nullity theorem, we have that r(Z_1) = m − n + β_0(G), which is nothing but the number of excess edges, i.e., edges added after forming a (maximal) spanning forest. Let e_1 e_2 . . . e_k form a cycle; denoting the reversal of an edge e by ê, set

z_c := ∑_{i=1}^{k} (1[e_i ∈ E⃗] e_i − 1[ê_i ∈ E⃗] ê_i).

Thus, it is easy to verify that ∂1 zc = 0 and hence zc ∈ Z1 . We can extend this further.


Theorem 9.1.6. Let F be a maximal spanning forest in G and e1 , . . . , el ∈ E − E(F ) where l =
m − n + β0 (G). Let C1 , . . . , Cl be the cycles in F ∪ e1 , . . . , F ∪ el respectively. Then zC1 , . . . , zCl form
a basis for Z1 .
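As a small concrete check (sketch, assuming numpy; the orientations below are our own choice): for the oriented triangle e_1 = (0, 1), e_2 = (1, 2), e_3 = (2, 0), traversing the cycle in the direction of the chosen orientations gives z_c = e_1 + e_2 + e_3, and indeed ∂_1 z_c = 0, so z_c ∈ Z_1.

    import numpy as np

    d1 = np.array([[-1.,  0.,  1.],    # row of vertex 0
                   [ 1., -1.,  0.],    # row of vertex 1
                   [ 0.,  1., -1.]])   # row of vertex 2
    z_c = np.array([1., 1., 1.])       # one coefficient per oriented edge
    print(d1 @ z_c)                    # [0. 0. 0.]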

Remark 9.1.7. Another challenging exercise: in terms of matrices, the linear transformations ∂_0, ∂_1 represent right multiplication, i.e., we have the chain

{0} → C_1 → C_0 → R → {0},

where the maps are ∂_2, ∂_1, ∂_0, ∂_{−1} respectively.

But we could have considered left multiplication, in which case we will have linear transformations δ_i as follows:

{0} ← C_1 ← C_0 ← R ← {0},

where the maps are δ_2, δ_1, δ_0, δ_{−1} respectively.

Can you compute the ranks and characterize connectivity as we did above with ∂0 , ∂1 ’s ?

See [Bapat 2010, Chapter 2] for more on incidence matrices.


We shall define the 0-1 incidence matrix M as follows: M(i, j) = 1[v_i ∈ e_j]. In other words,
M is the unoriented incidence matrix. The following result explains the importance of considering
orientations in the incidence matrix.

Theorem 9.1.8. Let G be a connected graph. Then r(M ) = n − 1 if G is bi-partite and r(M ) = n
otherwise.

See [Bapat 2010, Lemma 2.17] for a proof.
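Theorem 9.1.8 can also be verified numerically on small examples (a sketch assuming numpy; the helper unoriented_incidence is our own): the 0-1 incidence matrix of the odd cycle C_3 has full rank 3, while that of the bipartite cycle C_4 has rank 3 = n − 1.

    import numpy as np

    def unoriented_incidence(n, edges):
        M = np.zeros((n, len(edges)))
        for j, (u, v) in enumerate(edges):
            M[u, j] = M[v, j] = 1.0
        return M

    print(np.linalg.matrix_rank(unoriented_incidence(3, [(0, 1), (1, 2), (2, 0)])))          # 3
    print(np.linalg.matrix_rank(unoriented_incidence(4, [(0, 1), (1, 2), (2, 3), (3, 0)])))  # 3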

9.1.1 Laplacian Matrix

The Laplacian matrix plays a fundamental role in many topics within probability, graph theory,
algebraic topology and analysis. See Section 9.4 for references.
Let D = diag[deg(1), . . . , deg(n)] be the diagonal matrix with entries as the degrees of vertices.
Recall that A is the adjacency matrix defined in Definition 3.4.1. The Laplacian matrix is defined as
L = D − A. Verify that L = ∂1 ∂1T using the matrix representation of ∂1 and its transpose. Observe
that even though ∂1 is oriented, L is unoriented. Further, it is easy to see that if we think of ∂1 as
linear transformations as in the previous sections, we have that
L : C_0 → C_0,  ∑_{i=1}^{n} a_i v_i ↦ ∑_{i=1}^{n} (a_i deg(v_i) − ∑_{j : v_j ∼ v_i} a_j) v_i.

For purposes of our analysis, we will view L primarily as a matrix. Firstly observe that r(L) =
r(∂1 ∂1t ) ≤ r(∂1 ) ∧ r(∂1t ) = r(∂1 ). Trivially, we have that Ker∂1t ⊂ KerL.
We set the notation ⟨x, y⟩ = x · y = x y^t = y x^t for x, y ∈ R^n and ‖x‖^2 = ⟨x, x⟩ = ∑_{i=1}^{n} x_i^2.
Observe that for x ∈ Rn , we have that

x^t L x = x^t ∂_1 ∂_1^t x = ⟨∂_1^t x, ∂_1^t x⟩ = ‖∂_1^t x‖^2.    (9.1)

Hence, if x ∈ KerL then by the above equation ∂1t x = 0 i.e., x ∈ Ker∂1t i.e., KerL = Ker∂1t by
our earlier observation. Hence n(L) := r(KerL) = r(Ker∂1t ) = n − r(∂1 ) = β0 (G). Since n(L) is the
multiplicity of the eigenvalue 0, this shows that the multiplicity of 0 is β0 (G).
From (9.1), we also have that L is a positive semi-definite matrix and hence all its eigenvalues are non-negative. Denote the eigenvalues by λ_1 ≤ λ_2 ≤ . . . ≤ λ_n. Summarizing our above conclusions, here
is the theorem.

Theorem 9.1.9. Let l = β_0(G). We have that 0 = λ_1 = . . . = λ_l < λ_{l+1} ≤ . . . ≤ λ_n. Thus, G is connected iff λ_2 > 0.

Observe that ∂_1^t x = (x_i − x_j)_{(v_i, v_j) ∈ E⃗}.

Lemma 9.1.10. Let C1 , . . . , Cl be the components of G. Define x1 , . . . , xl as vectors that are constant
on each of the components respectively i.e., xik = 1[k ∈ Ci ]. Then, we have that x1 , . . . , xl form a basis
for KerL.

Proof. From the observation before the lemma, we have that ‖∂_1^t x‖^2 = ∑_{v_i ∼ v_j} (x_i − x_j)^2 and, from (9.1), we know that if Lx = 0 then ∂_1^t x = 0 and hence x_i = x_j for all v_i ∼ v_j. This implies that x is constant on each component, and hence x_1, . . . , x_l ∈ Ker L. Since r(Ker L) = l, it suffices to show that x_1, . . . , x_l are linearly independent. This follows easily as the vectors are supported on disjoint sets of vertices.
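A numerical illustration of Theorem 9.1.9 and Lemma 9.1.10 (a sketch assuming numpy; the two-component graph below is our own toy example): L = D − A is positive semi-definite and the eigenvalue 0 has multiplicity β_0(G).

    import numpy as np

    A = np.zeros((5, 5))                          # triangle on {0,1,2} plus the edge {3,4}
    for u, v in [(0, 1), (1, 2), (2, 0), (3, 4)]:
        A[u, v] = A[v, u] = 1.0

    L = np.diag(A.sum(axis=1)) - A                # Laplacian L = D - A
    eig = np.linalg.eigvalsh(L)                   # real, non-negative eigenvalues
    print(np.round(eig, 6))                       # [0, 0, 2, 3, 3]
    print(int(np.sum(np.isclose(eig, 0.0))))      # 2 = beta0(G)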

9.1.2 Spanning trees and Laplacian

We recall the famed Cauchy-Binet formula. For an n × m matrix A and an m × n matrix B with n ≤ m, we have that

det(AB) = ∑_{S⊂[m], |S|=n} det(A[[n]|S]) det(B[S|[n]]),

where A[T|S] refers to the matrix A restricted to rows in T and columns in S. Let W = V − {1}; we can apply the Cauchy-Binet formula to L[W|W]. As L = ∂_1 ∂_1^t, we have that L[W|W] = ∂_1[W|E⃗] ∂_1^t[E⃗|W]. Thus

det(L[W|W]) = ∑_{S⊂E⃗ : |S|=n−1} det(∂_1[W|S]) det(∂_1^t[S|W]) = ∑_{S⊂E⃗ : |S|=n−1} det(∂_1[W|S])^2,

where the last equality is due to the fact that ∂_1^t[S|W] = (∂_1[W|S])^t. Since the sum of the rows of ∂_1[V|S] is 0, r(∂_1[W|S]) = r(∂_1[V|S]). Hence, if G is not connected, then r(∂_1[W|E⃗]) < n − 1 and det(L[W|W]) = 0. Thus, if |E| < n − 1, the LHS and the RHS in the above equation are zero. Assume that G is connected.
connected.


Fix S ⊂ E⃗ with |S| = n − 1. Then det(∂_1[W|S]) ≠ 0 iff ∂_1[W|S] is non-singular iff ∂_1[W|S] has full
column rank iff r(∂1 [W |S]) = r(∂1 [V |S]) = n − 1 iff S is acyclic iff S is a spanning tree in G. Thus,
we have that
det(L[W|W]) = ∑_{S⊂E⃗, S a spanning tree} det(∂_1[W|S])^2.
If we show that det(∂_1[W|S]) ∈ {0, −1, +1}, then we have that

det(L[W |W ]) = |Spanning trees of G|. (9.2)

The following lemma completes the proof.





Lemma 9.1.11. For W ⊂ V, S ⊂ E such that |W | = |S|, we have that det(∂1 [W |S]) ∈ {0, −1, +1}.

Proof. We will prove this by induction on k = |W| = |S|. Let B = ∂_1[W|S]. The case k = 1 is trivial as the entries are 0, −1, +1. Assume that the claim holds for all l < k, for some k ≥ 2. If each column of B has both a +1 and a −1 entry, then the sum of the rows is 0 and det B = 0. If there is a column of 0's in B, then det B = 0 again. Hence, we may assume that there is a column with only a single non-zero entry in B; suppose this is the (w, s)-th entry. Then, expanding along this column, det B = ±B_{w,s} det(∂_1[W − {w}|S − {s}]). By induction, the latter is 0, −1 or +1 and, since B_{w,s} ∈ {+1, −1}, we obtain the desired conclusion.

Corollary 9.1.12. Let 0 = λ_1 ≤ λ_2 ≤ . . . ≤ λ_n be the eigenvalues of L. Then the number of spanning trees of G is (∏_{i=2}^{n} λ_i)/n.

Proof. Let ST be the number of spanning trees of G. Then

n × ST = ∑_{W⊂[n], |W|=n−1} det(L[W|W]) = ∑_{W⊂[n], |W|=n−1} ∏_{i∈W} λ_i,

as the sum of the k × k principal minors of a matrix is equal to the sum of the products of its eigenvalues taken k at a time (here k = n − 1). Since λ_1 = 0, the only non-zero product on the right is λ_2 · · · λ_n, and the claim follows.
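Corollary 9.1.12 and (9.2) are easy to test numerically; the sketch below (assuming numpy; K_4 is our own test case) counts the spanning trees of K_4 both as det(L[W|W]) and as (λ_2 · · · λ_n)/n, and both agree with Cayley's formula 4^{4−2} = 16.

    import numpy as np

    n = 4
    A = np.ones((n, n)) - np.eye(n)           # adjacency matrix of K4
    L = np.diag(A.sum(axis=1)) - A            # Laplacian L = D - A

    reduced = np.delete(np.delete(L, 0, axis=0), 0, axis=1)   # remove the row and column of vertex 1
    print(round(np.linalg.det(reduced)))      # 16

    eig = np.linalg.eigvalsh(L)               # eigenvalues 0, 4, 4, 4 in ascending order
    print(round(np.prod(eig[1:]) / n))        # (4*4*4)/4 = 16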

Theorem 9.1.13. Let W ⊂ [n] and |W | = n − k. Then det(L[W |W ]) is the number of spanning
forests in G with k components and each of the elements of W c being in a distinct component.

Proof. Let W^c = {w_1, . . . , w_k}. Again by Cauchy-Binet, we have that

det(L[W|W]) = ∑_{S⊂E⃗ : |S|=n−k} det(∂_1[W|S])^2.

From Lemma 9.1.11, we know that det(∂_1[W|S]) ∈ {0, −1, +1} and hence it is enough to show that ∂_1[W|S] is non-singular iff the edges of S form a forest with k components with each w_i in a distinct component.
Suppose the edges of S form a forest with k components with each w_i in a distinct component, say D_i. Let the edges in the i-th component be S_i. Set W_i = D_i − w_i. We have that ∪W_i = W and ∪S_i = S. Thus ∂_1[W|S] is a block matrix with blocks ∂_1[W_i|S_i], i = 1, . . . , k. By the arguments preceding (9.2), we have that det(∂_1[W_i|S_i]) ∈ {−1, +1} as (D_i, S_i) forms a tree, and hence det(∂_1[W|S]) ∈ {−1, +1}.
Conversely, if ∂_1[W|S] is non-singular, then the columns of S in ∂_1[V|S] are linearly independent and hence the edges in S form a forest, which has k components as |S| = n − k. Finally, if some component of (V, S) contained no vertex of W^c, then all of its vertices would lie in W and the rows of ∂_1[W|S] indexed by these vertices would sum to zero, contradicting non-singularity. Hence each of the k components contains at least one, and therefore exactly one, of w_1, . . . , w_k.

9.2 More properties of Adjacency Matrix

A subgraph H of G is said to be elementary if every component is a cycle or an edge. Denote by


c(H) and c1 (H) the number of cycle components and edge components in a subgraph H respectively.

Theorem 9.2.1. det A = ∑_{H : H is a spanning elementary subgraph} (−1)^{n−c_1(H)−c(H)} 2^{c(H)}.
Proof. Denoting the symmetric group of permutations by S_n, we have that

det A = ∑_{π∈S_n} (−1)^{n−|π|} A_π;  A_π := ∏_{i=1}^{n} A_{i,π(i)},

and |π| is the number of cycles in the cycle decomposition of π. Define the graph H_π = {i ∼ j : π(i) = j}. Note that A_π ∈ {0, 1}. We shall sketch the proof steps here and refer to [Bapat 2010, Theorem 3.8] for the details.
Firstly, A_π = 1 iff i ∼ π(i) for every i (in particular, π(i) ≠ i for all i), iff H_π is a spanning elementary subgraph with the components of H_π corresponding to the cycles of π. Secondly, for a spanning elementary subgraph H,

|{π ∈ Sn : Hπ = H}| = 2c(H) ,

as reversing the orientation in a cycle of π still yields that Hπ = H. Finally, since each cycle in π
corresponds to a component in Hπ , we have that |π| = c(H) + c1 (H). Thus, the proof is complete by
the determinantal formula above.
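A quick check of Theorem 9.2.1 (sketch, assuming numpy): for K_3 the only spanning elementary subgraph is the triangle itself, contributing (−1)^{3−0−1} · 2^1 = 2, and indeed det A = 2.

    import numpy as np

    A = np.ones((3, 3)) - np.eye(3)     # adjacency matrix of K3
    print(round(np.linalg.det(A)))      # 2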

Recall that the characteristic polynomial is φ_A(λ) = det(λI − A) = ∑_{i=0}^{n} c_i λ^{n−i}. Note that c_0 = 1. Further, we have that c_k = (−1)^k ∑_{W⊂[n], |W|=k} det(A[W|W]). Observe that A[W|W] = A(H_W) where H_W is the induced subgraph on W. Thus, by Theorem 9.2.1, we have that


det(A(H_W)) = ∑_{H⊂H_W : H el. sp.} (−1)^{k−c_1(H)−c(H)} 2^{c(H)}

and so

c_k = ∑_{H : H el. subgraph, |V(H)|=k} (−1)^{c_1(H)+c(H)} 2^{c(H)}.
Trivially, c1 = 0.
Suppose that c_3 = . . . = c_{2k−1} = 0 for some k ≥ 1.
If there is a cycle of length 3, then c_3 ≠ 0 as every elementary subgraph on 3 vertices is a cycle. Since c_3 = 0, there is no cycle of length 3. If there is a cycle of length 5, then there is an induced cycle of length 3 or 5, but since there is no cycle of length 3, there is an induced cycle of length 5. Again, since there are no 3-cycles, all elementary subgraphs on 5 vertices are induced 5-cycles and so c_5 ≠ 0, a contradiction. Thus there are no induced 5-cycles.
Similarly, if there is a cycle of odd length l, there is an induced cycle of odd length at most l. But, recursively applying the above argument, there are no induced odd cycles of length strictly smaller than l and hence the induced cycle is of length l. But this yields that c_l ≠ 0, a contradiction for l ≤ 2k − 1. Thus if c_3 = . . . = c_{2k−1} = 0, there are no odd cycles of length at most 2k − 1.

Now, if H is an elementary subgraph on 2k + 1 vertices, then H contains an odd cycle of length at most 2k + 1. By the previous paragraph, we have that there are no odd cycles of length at most 2k − 1. Hence the odd cycle is of length 2k + 1, and hence the number of induced (2k + 1)-cycles is −(1/2) c_{2k+1}.
Thus, we get the following theorem characterizing bipartiteness.

Theorem 9.2.2. TFAE :

1. G is bipartite.

2. c2k+1 = 0, k = 0, 1, . . .

3. A has symmetric spectrum i.e., if λ is an e.v. of A with multiplicity k, so is −λ.

Proof. The observations before the theorem prove that (1) is equivalent to (2). To show that (2) is equivalent to (3), observe that (2) implies that φ_A(λ) = ∏_{i=1}^{m} (λ^2 − a_i) (up to a factor of λ when n is odd), as c_i = 0 for all odd i. So the eigenvalues are ±√a_i and, since the eigenvalues are all real due to the symmetry of A, we have that the spectrum of A is symmetric.
Conversely, if the spectrum of A is symmetric then φ_A(λ) = ∏_{i=1}^{m} (λ^2 − a_i^2) (again up to a factor of λ) for some a_i ∈ R, and thus c_i = 0 for all odd i.
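A quick numerical check of Theorem 9.2.2 (sketch, assuming numpy): the adjacency spectrum of the bipartite cycle C_4 is symmetric about 0, while that of the odd cycle C_3 is not.

    import numpy as np

    def cycle_adjacency(n):
        A = np.zeros((n, n))
        for i in range(n):
            A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
        return A

    print(np.round(np.linalg.eigvalsh(cycle_adjacency(4)), 3))   # [-2, 0, 0, 2]
    print(np.round(np.linalg.eigvalsh(cycle_adjacency(3)), 3))   # [-1, -1, 2]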

9.3 Exercises
1. Let G be a graph with incidence matrix ∂_1 and let B = (b_{ij})_{i,j≤k} be a (k × k)-submatrix of ∂_1 which is nonsingular. Show that there is precisely one permutation σ of 1, . . . , k such that the product ∏_{i=1}^{k} b_{iσ(i)} is non-zero.

2. Let δi = ∂iT be the transpose of the incidence matrices ∂i for i = 0, 1. Show that δi defines a
linear map from Ci−1 to Ci where C−1 = F, the underlying field. Further show that δ1 ◦ δ0 = 0.
Describe the spaces Im(δi ), Ker(δi ) for i = 0, 1. Can you express the number of connected
components in terms of the ranks of these spaces ?

3. Let G be a graph with k components. Suppose B is an (l × l)-submatrix of the incidence matrix ∂_1, with columns indexed by {e_1, . . . , e_l} and rows by {v_1, . . . , v_l}. Show that B is a non-singular matrix only if l ≤ r(∂_1) and {e_1, . . . , e_l} form a forest. What is the subgraph {e_1, . . . , e_l} when l = r(∂_1)?

4. Compute the eigenvalues of the Laplacian and adjacency matrices of the cycle graph and the path graph.

5. Let G be a graph with n vertices, m edges and let λ_1 ≥ . . . ≥ λ_n be the eigenvalues of the adjacency matrix of G. Show the following bounds for the eigenvalues:

(a) λ_1 ≤ √(2m(n−1)/n).

(b) δ(G) ≤ λ1 ≤ ∆(G).


(c) If H is an induced subgraph on p vertices then

λn (G) ≤ λp (H) ≤ λ1 (H) ≤ λ1 (G).

6. Let G be a connected graph on n vertices and m edges. Let ∂_1 be its incidence matrix under a fixed orientation of edges. Let y = (y_1, . . . , y_n)^T be an (n × 1)-column vector such that for some i ≠ j ∈ [n], y_i = +1, y_j = −1 and y_l = 0 for l ∈ [n] \ {i, j}. Show that there exists an (m × 1)-column vector x such that ∂_1 x = y. Give a graph-theoretic interpretation of the same.

7. Let λ1 ≤ . . . ≤ λn be the eigenvalues of L. Compute the eigenvalues of L + aJ for a > 0 where


J is the all 1 matrix.

8. Compute the eigenvalues of the Laplacian matrix L and the adjacency matrix A of the Petersen graph. Calculate the number of spanning trees of the Petersen graph. (Hint: Show that A^2 + A − 2I = J.)

9. Compute the eigenvalues of L(G × H) in terms of L(G) and L(H) where G × H is the cartesian
product of G and H.

9.4 Further reading


The books of [Bapat 2010] and [Godsil and Royle 2013] are excellent sources for more details on graphs and matrices. The spectrum of a graph also plays an important role in studying "zeta functions" on graphs (see [Terras 2010]) and uses what is known as the non-backtracking matrix of a graph. As mentioned before, higher-dimensional topological analogues of the incidence matrix can be found in the book of [Edelsbrunner 2010]. For a more analytical connection of the Laplacian, see [Grigoryan 2018]. The Laplacian also plays a crucial role in certain games (the dollar game or the abelian sandpile model) defined on graphs, and using this one can formulate and prove a Riemann-Roch theorem for graphs; see [Corry and Perkinson 2018]. Abelian sandpile models were originally introduced by physicists to model certain phenomena of 'particles organizing themselves', and many interesting questions about this model remain mathematically unproven. See [Perkinson 2011] for connections between "algebraic geometry" and "sandpiles". See [Bond and Levine 2013] for models more general than abelian sandpiles, known as "abelian networks".
Chapter 10

Planar graphs

Can we draw graphs on paper without edges crossing each other? We can draw K_4 and so any graph on at most 4 vertices can be drawn. What about K_5? First, we shall clarify what we mean by 'drawing'.

Definition 10.0.1 (Planar embedding). An embedding of the graph G = (V, E) in the plane is a collection of functions f, f_e, e ∈ E, such that

• f : V → R2 is an injection.

• ∀e = (u, v) ∈ E, fe : [0, 1] → R2 is a continuous simple path such that fe (0) = f (u), fe (1) =
f (v), fe ([0, 1]) ∩ f (V ) = {f (u), f (v)}.

• for e ≠ e′, we have that f_e([0, 1]) ∩ f_{e′}([0, 1]) = f(e ∩ e′) or, equivalently, f_e((0, 1)) ∩ f_{e′}((0, 1)) = ∅.

Since we are concerned with finite graphs, we shall assume that all the f_e's are polygonal paths, i.e., unions of finitely many straight line segments.

We shall never specify f_e directly but rather work via drawings. A plane graph is a graph G with
its embedding f, fe , e ∈ E. We shall view plane graphs as a subset of R2 by identifying G with
f (V ) ∪ ∪e∈E fe ([0, 1]) ⊂ R2 . We shall denote fe ([0, 1]) by e and fe ((0, 1)) by e̊. A graph isomorphic to
a plane graph is called a planar graph. We refer to [Diestel 2000, Chapter 4] for various topological and
geometric facts that shall be used in some of the proofs as well as more details. The most important
of these is the Jordan curve theorem.

Definition 10.0.2 (Faces). Let G be a plane graph. R2 − G is an open set and consists of finitely
many connected components. Each connected component is called a face.

Here we list some properties of faces.

Proposition 10.0.3. 1. For any face F and an edge e, e̊ ∩ ∂F = ∅ or e ⊂ ∂F . Thus we have that
∂F = ∪_{e : e̊∩∂F ≠ ∅} e.


2. If H ⊂ G as a plane graph and F is a face of G, then there is a face F′ of H such that F ⊂ F′.

3. Assuming as above, if ∂F ⊂ H then ∂F = ∂F′.

4. An edge e belongs to a cycle iff e ∈ ∂F1 ∩ ∂F2 for two distinct faces F1 , F2 .

5. An edge e is a cut-edge iff there exists a unique face F such that e ∈ ∂F .

A corollary of the above proposition is that a forest has only one face. Further, we define the
length of a face F as
l(F) := ∑_{e∈∂F} (1[e is in a cycle] + 2 · 1[e is a cut-edge]).

One can define a notion of ’traversal’ (i.e., a closed walk) of a face such that the length of the closed
walk is the length of the face.
Exercise 10.0.4. ∑_F l(F) = 2|E|.

10.1 Euler’s formula and Kuratowski’s theorem


Theorem 10.1.1 (Euler’s formula). Let G be a plane graph with v vertices, e edges and f faces. Then
v − e + f = 1 + β0 (G) where recall that β0 (G) is the number of connected components.

Proof. The proof proceeds by showing the formula for forests and then inductively verifying it for connected graphs. For a forest, observe that there is only one face, that every edge is a cut-edge and that v − e = β_0(G); now use the definition of the length of a face and Exercise 10.0.4.
If the formula holds for trees, then one can show that if G, H are two disjoint plane graphs then there exist u ∈ G, v ∈ H such that G′ = G ∪ H ∪ (u, v) is also a plane graph with the number of faces being f(G′) = f(G) + f(H) − 1.
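For instance, the usual plane drawing of K_4 (a triangle with a fourth vertex inside, joined to the three corners) has v = 4, e = 6 and f = 4 (three bounded triangular faces and the unbounded face), and indeed 4 − 6 + 4 = 2 = 1 + β_0(K_4).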

Lemma 10.1.2. Suppose G is a connected plane graph with v ≥ 3. Then e ≤ 3v − 6. Further if G


has no triangles then e ≤ 2v − 4.

Proof. Note that if ∂F corresponds to a cycle, l(F ) ≥ 3 and if not l(F ) ≥ 4 as v ≥ 3 and G is
connected. Thus, we have by Euler’s formula that
3(2 − v + e) = 3f ≤ ∑_F l(F) = 2e

and so e ≤ 3v − 6. Now, if G has no triangles, then l(F ) ≥ 4 for all F and the above inequality yields
that e ≤ 2v − 4.

Using the above lemma, we can show that K_{3,3} and K_5 are not planar. A direct constructive proof using some geometric arguments is also possible for the non-planarity of K_{3,3} and K_5.
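For instance, K_5 has v = 5 and e = 10 > 3 · 5 − 6 = 9, so it cannot be planar, and K_{3,3} is triangle-free with v = 6 and e = 9 > 2 · 6 − 4 = 8, so it cannot be planar either.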

We say H is a minor of a graph G if H is obtained from G by a sequence of the following


operations: (i) edge-deletion, (ii) vertex-deletion, or (iii) edge-contraction. These three operations preserve planarity (see Section 10.4).
Now, we shall state without proof one of the important theorems.

Theorem 10.1.3 (Kuratowski's theorem). A graph is planar iff it has no K_{3,3} or K_5 minor.

A graph is a triangulation if it is connected and l(F ) = 3 for every face F . A planar graph is said
to be maximal if addition of any edge makes it non-planar.

Lemma 10.1.4. Let G be a plane graph with v ≥ 3. TFAE:

1. e = 3v − 6 and G is connected

2. G is a triangulation.

3. G is a maximal planar graph.

Proof. The equivalence of (i) and (ii) is by noting that the inequality in the proof of Lemma 10.1.2 is
an equality in this case.

Exercise 10.1.5. Show that (ii) and (iii) are equivalent in the above lemma.

10.2 Coloring of Planar graphs


Lemma 10.2.1. Every planar graph has a vertex of degree at most 5.

Proof. If every vertex has degree at least 6, then 2e ≥ 6v which contradicts Lemma 10.1.2.

Using the above lemma and induction, one can prove that a planar graph is 6-colorable. But we can do better with a little more argument.

Theorem 10.2.2. Every planar graph is 5-colorable.

Proof. The theorem is trivially true for graphs with at most 5 vertices. By induction, we assume that
the theorem holds for all graphs with at most n vertices.
Let G be a planar graph with n + 1 vertices. By Lemma 10.2.1, there is a vertex v of degree at
most 5. Choose the same.

CASE 1 : If deg(v) < 5, then by induction G − v is 5-colorable and we can assign v a color not
assigned to any of its (at most 4) neighbours.

CASE 2: Suppose deg(v) = 5. Let N(v) = {v_1, . . . , v_5}. Since K_5 is not planar, v_1, . . . , v_5 cannot form a complete graph and hence, WLOG, assume that v_1 ≁ v_2. Construct a new planar graph G′ by contracting the edge (v_1, v) and denoting the new vertex by v′. Further, construct a planar graph G″ by contracting the edge (v′, v_2) and denoting the new vertex by v″. Since G″ has n − 1 vertices and is planar, it is 5-colorable.
To construct a 5-coloring of G: assign the same colors as in G″ to all vertices in G except v, v_1, v_2. Further, assign the color of v″ to both v_1 and v_2. Now N(v) has been colored using only 4 colors and so assign the 5th color to v.

10.3 ***Graph dual***


Let G be a plane graph. Construct a new (multi-)plane graph G∗, called the dual, as follows: choose x_i ∈ F̊_i for every face F_i. For every e ∈ ∂F_i ∩ ∂F_j with i ≠ j, draw an edge between x_i and x_j. If e ∈ ∂F_i is a cut-edge, we draw a loop at x_i. We can draw G∗ as a plane graph as follows: connect x_i to the mid-point of every edge e with e ∈ ∂F_i, such that the paths to no two edges intersect. Thus, if e ∈ ∂F_i ∩ ∂F_j for i ≠ j, then x_i and x_j are both connected to the mid-point of e. If e is a cut-edge, draw two non-intersecting arcs from x_i to the mid-point of e.
For example, see Figure 10.1 for a graph (in blue) and its dual (in red). Two isomorphic

Figure 10.1: A graph and its dual

graphs can have different duals; see Figure 10.2.

Exercise 10.3.1. Show that the following are equivalent for a plane graph G.

1. G is bi-partite

2. l(F ) is even for all F .

3. The dual graph G∗ is Eulerian.



Figure 10.2: Two isomorphic graphs with same dual

Hint : Show that dG∗ (xi ) = l(Fi ).

10.4 Exercises
1. Merging two adjacent vertices of a planar graph yields another planar graph.

2. Any embedding of a planar graph will have the same number of faces.

3. Show that any connected triangle-free planar graph has at least one vertex of degree three or
less. Prove by induction on the number of vertices that any connected triangle-free planar graph
is 4-colorable.

4. Show that every planar graph with at least 4 vertices has at least 4 vertices of degree less than
or equal to 5. (Hint : Consider a maximal planar graph.)

Additional Exercises : (not in syllabus or Tutorials)

1. Let G be a plane graph and G∗ be its dual. Prove the following :

• G∗ is connected.
• If G is connected, then each face of G∗ contains exactly one vertex of G.

• (G∗ )∗ = G iff G is connected.

2. Prove that a set of edges T ⊂ E(G) in a connected plane graph form a spanning tree iff the
duals of the edges E(G) − T form a spanning tree of G∗ .

3. Prove that every n-vertex self-dual plane graph has 2n − 2 edges. For all n ≥ 4, construct a
simple n-vertex self-dual plane graph.

10.5 Further Reading


For more on planar graphs, see [Van Lint and Wilson, Chapter 33] and for graph embeddings on
surfaces, see [Van Lint and Wilson, Chapter 35].
Bibliography

[Addario-Berry 2015] Addario-Berry L. ”Partition functions of discrete coalescents: from Cayley’s formula to Frieze’s
ζ(3) limit theorem.” In XI Symposium on Probability and Stochastic Processes. (2015) (pp. 1-45). Birkhäuser, Cham.

[Aigner et al. 2010] Aigner, M., Ziegler, G. M., Hofmann, K. H., & Erdos, P. (2010). Proofs from the Book (Vol. 274).
Berlin: Springer.

[Alon and Spencer 2004] Alon, N., and Spencer, J. H. (2004). The probabilistic method. John Wiley and Sons.

[Babai 1992] Babai, László, and Péter Frankl. Linear Algebra Methods in Combinatorics: With Applications to Geom-
etry and Computer Science. Department of Computer Science, University of Chicago, 1992.

[Baker 2015] Baker, M. (2015). Hodge theory in combinatorics. Blog post.

[Baker 2018] Baker, M. (2018). Hodge theory in combinatorics. Bulletin of the American Mathematical Society, 55(1),
57-80.

[Bapat 2010] Bapat, R. B. (2010). Graphs and matrices (Vol. 27). London: Springer.

[Bauerschmidt et al. 2012] Bauerschmidt, Roland, Hugo Duminil-Copin, Jesse Goodman, and Gordon Slade. ”Lectures
on self-avoiding walks.” Probability and Statistical Physics in Two and More Dimensions (D. Ellwood, CM Newman,
V. Sidoravicius, and W. Werner, eds.), Clay Mathematics Institute Proceedings 15 (2012): 395-476.

[Bertsimas 1990] Bertsimas, Dimitris J., and Garrett Van Ryzin. ”An asymptotic determination of the minimum span-
ning tree and minimum matching constants in geometrical probability.” Operations Research Letters 9, no. 4 (1990):
223-231.

[Bollobas 2013] B. Bollobás, Modern graph theory, volume 184, (2013), Springer Science & Business Media.

[Bond and Levine 2013] B. Bond and L. Levine. (2013) Abelian Networks : Foundations and Examples,
arXiv:1309.3445v1.

[Clay and Margalit 2017] Clay, Matt, and Dan Margalit, eds. Office Hours with a Geometric Group Theorist. Princeton
University Press, 2017.

[Choudum 1986] Choudum, S. A. ”A simple proof of the Erdos-Gallai theorem on graph sequences.” Bulletin of the
Australian Mathematical Society 33, no. 1 (1986): 67-70.

[Corry and Perkinson 2018] S. Corry and D. Perkinson. Divisors and Sandpiles: An Introduction to Chip-Firing, Volume
114, (2018), American Mathematical Soc.

[Diestel 2000] R. Diestel, Graph theory, (2000), Springer-Verlag Berlin and Heidelberg GmbH.


[Edelsbrunner 2010] Edelsbrunner, H., & Harer, J. (2010). Computational topology: an introduction. American Math-
ematical Soc..

[Frieze 1985] Frieze, Alan M. ”On the value of a random minimum spanning tree problem.” Discrete Applied Mathe-
matics 10, no. 1 (1985): 47-56.

[Frieze and Pegden 2017] Frieze, Alan, and Wesley Pegden. ”Separating subadditive Euclidean functionals.” Random
Structures & Algorithms 51, no. 3 (2017): 375-403.

[Gale and Shapley 1962] Gale, David, and Lloyd S. Shapley. ”College admissions and the stability of marriage.” The
American Mathematical Monthly 69, no. 1 (1962): 9-15.

[Godsil and Royle 2013] Godsil, C., and Royle, G. F. (2013). Algebraic graph theory (Vol. 207). Springer Science &
Business Media.

[Grigoryan 2018] Grigor'yan, A. (2018). Introduction to Analysis on Graphs (Vol. 71). American Mathematical Soc.

[Grochow 2019] Grochow, Joshua. ”New applications of the polynomial method: The cap set conjecture and beyond.”
Bulletin of the American Mathematical Society 56, no. 1 (2019): 29-64.

[Guth 2016] Guth, Larry. Polynomial methods in combinatorics. Vol. 64. American Mathematical Soc., 2016.

[Jukna 2011] Jukna, Stasys. Extremal combinatorics: with applications in computer science. Springer Science & Business
Media, 2011.

[Krishnapur 2019] Krishnapur, M. Topics in Analysis: Lecture Notes. http://math.iisc.ac.in/~manju/TA2019/Topicsinanalysis2019.pdf, 2019.

[Lovasz 2011] Lovász, László. ”Graph Theory Over 45 Years.” In An Invitation to Mathematics, pp. 85-95. Springer,
Berlin, Heidelberg, 2011.

[Pitman 1999] Pitman, Jim. ”Coalescent random forests.” Journal of Combinatorial Theory, Series A 85, no. 2 (1999):
165-193.

[Munkres 2018] Munkres, J. R. (2018). Elements of algebraic topology. CRC Press.

[Perkinson 2011] D. Perkinson, J. Perelman and John Wilmes. Primer for the Algebraic Geometry of Sandpiles,
arXiv:1112.6163.

[Rhee 1992] Rhee, Wansoo T. ”On the travelling salesperson problem in many dimensions.” Random Structures &
Algorithms 3, no. 3 (1992): 227-233.

[Smirnov 2011] Smirnov, Stanislav. ”How do research problems compare with IMO problems?.” In An Invitation to
Mathematics, pp. 71-83. Springer, Berlin, Heidelberg, 2011.

[Spencer 1994] Spencer, J. (1994). Ten lectures on the probabilistic method (Vol. 64). SIAM.

[Stanley 2013] Stanley, Richard P. ”Algebraic combinatorics.” Springer 20 (2013): 22.

[Steele 1997] Steele, J Michael. ”Probability theory and combinatorial optimization.” vol. 69, (1997) : SIAM.

[Terras 2010] Terras, A. (2010). Zeta functions of graphs: a stroll through the garden (Vol. 128). Cambridge University
Press.

[Tripathi 2010] Tripathi, Amitabha, Sushmita Venugopalan, and Douglas B. West. ”A short constructive proof of the
Erdős–Gallai characterization of graphic lists.” Discrete Mathematics 310, no. 4 (2010): 843-844.

[Van Lint and Wilson] van Lint, Jacobus Hendricus and Wilson, Richard Michael A course in combinatorics, 2001,
Cambridge university press.

[West 2001] D. B. West, Introduction to graph theory, Volume 2 (2001), Prentice hall Upper Saddle River.
