STUDENT MATHEMATICAL LIBRARY
Volume 90
Discrete
Morse Theory
Nicholas A. Scoville
Editorial Board
Satyan L. Devadoss
John Stillwell (Chair)
Rosa Orellana
Serge Tabachnikov
2010 Mathematics Subject Classification. Primary 55U05, 58E05, 57Q05,
57Q10.
For additional information and updates on this book, visit
www.ams.org/bookpages/stml-90
Library of Congress Cataloging-in-Publication Data
Cataloging-in-Publication Data has been applied for by the AMS.
See http://www.loc.gov/publish/cip/.
Copying and reprinting. Individual readers of this publication, and nonprofit
libraries acting for them, are permitted to make fair use of the material, such as to
copy select pages for use in teaching or research. Permission is granted to quote brief
passages from this publication in reviews, provided the customary acknowledgment of
the source is given.
Republication, systematic copying, or multiple reproduction of any material in this
publication is permitted only under license from the American Mathematical Society.
Requests for permission to reuse portions of AMS publication content are handled
by the Copyright Clearance Center. For more information, please visit
www.ams.org/publications/pubpermissions.
Send requests for translation rights and licensed reprints to
reprint-permission@ams.org.
© 2019 by the American Mathematical Society. All rights reserved.
Printed in the United States of America.
∞ The paper used in this book is acid-free and falls within the guidelines
established to ensure permanence and durability.
Visit the AMS home page at https://www.ams.org/
J.M.J.
Contents
Preface ix
Chapter 0. What is discrete Morse theory? 1
§0.1. What is discrete topology? 2
§0.2. What is Morse theory? 9
§0.3. Simplifying with discrete Morse theory 13
Chapter 1. Simplicial complexes 15
§1.1. Basics of simplicial complexes 15
§1.2. Simple homotopy 31
Chapter 2. Discrete Morse theory 41
§2.1. Discrete Morse functions 44
§2.2. Gradient vector fields 56
§2.3. Random discrete Morse theory 73
Chapter 3. Simplicial homology 81
§3.1. Linear algebra 82
§3.2. Betti numbers 86
§3.3. Invariance under collapses 95
Chapter 4. Main theorems of discrete Morse theory 101
§4.1. Discrete Morse inequalities 101
§4.2. The collapse theorem 111
Chapter 5. Discrete Morse theory and persistent homology 117
§5.1. Persistence with discrete Morse functions 117
§5.2. Persistent homology of discrete Morse functions 134
Chapter 6. Boolean functions and evasiveness 149
§6.1. A Boolean function game 149
§6.2. Simplicial complexes are Boolean functions 152
§6.3. Quantifying evasiveness 155
§6.4. Discrete Morse theory and evasiveness 158
Chapter 7. The Morse complex 169
§7.1. Two definitions 169
§7.2. Rooted forests 177
§7.3. The pure Morse complex 179
Chapter 8. Morse homology 187
§8.1. Gradient vector fields revisited 188
§8.2. The flow complex 195
§8.3. Equality of homology 196
§8.4. Explicit formula for homology 199
§8.5. Computation of Betti numbers 205
Chapter 9. Computations with discrete Morse theory 209
§9.1. Discrete Morse functions from point data 209
§9.2. Iterated critical complexes 220
Chapter 10. Strong discrete Morse theory 233
§10.1. Strong homotopy 233
§10.2. Strong discrete Morse theory 242
§10.3. Simplicial Lusternik-Schnirelmann category 249
Bibliography 257
Notation and symbol index 265
Index 267
Preface
This book serves as both an introduction to discrete Morse theory and
a general introduction to concepts in topology. I have tried to present
the material in a way accessible to undergraduates with no more than
a course in mathematical proof writing. Although some books such as
[102, 132] include a single chapter on discrete Morse theory, and one
[99] treats both smooth and discrete Morse theory together, no book-
length treatment is dedicated solely to discrete Morse theory. Discrete
Morse theory deserves better: It serves as a tool in applications as varied
as combinatorics [16, 41, 106, 108], probability [57], and biology [136].
More than that, it is fascinating and beautiful in its own right. Discrete
Morse theory is a discrete analogue of the “smooth” Morse theory de-
veloped in Marston Morse’s 1925 paper [124], but it is most popularly
known via John Milnor [116]. Fields medalist Stephen Smale went so
far as to call smooth Morse theory “the single greatest contribution of
American mathematics” [144]. This beauty and utility carry over to
the discrete setting, as many of the results, such as the Morse inequali-
ties, have discrete analogues. Discrete Morse theory not only is topolog-
ical but also involves ideas from combinatorics and linear algebra. Yet it
is easy to understand, requiring no more than familiarity with basic set
theory and mathematical proof techniques. Thus we find several online
introductions to discrete Morse theory written by undergraduates. For
example, see the notes of Alex Zorn for his REU project at the University
of Chicago [158], Dominic Weiller’s bachelor’s thesis [150], and Rachel
Zax’s bachelor’s thesis [156].
From a certain point of view, discrete Morse theory has its founda-
tions in the work of J. H. C. Whitehead [151, 152] from the early to mid-
20th century, who made the deep connection between simple homotopy
and homotopy type. Building upon this work, Robin Forman published
the original paper introducing and naming discrete Morse theory in 1998
[65]. His extremely readable A user’s guide to discrete Morse theory is still
the gold standard in the field [70]. Forman published several subsequent
papers [66, 68–71] further developing discrete Morse theory. The field
has burgeoned and matured since Forman’s seminal work; it is certainly
established enough to warrant a book-length treatment.
This book further serves as an introduction, or more precisely a first
exposure, to topology, one with a different feel and flavor from other
introductory topology books, as it avoids both the point-set approach
and the surfaces approach. In this text, discrete Morse theory is applied
to simplicial complexes. While restriction to only simplicial complexes
does not expose the full generality of discrete Morse theory (it can be de-
fined on regular CW complexes), simplicial complexes are easy enough
for any mathematically mature student to understand. A restriction to
simplicial complexes is indeed necessary for this book to act as an expo-
sure to topology, as knowledge of point-set topology is required to un-
derstand CW complexes. The required background is only a course in
mathematical proofs or an equivalent course teaching proof techniques
such as mathematical induction and equivalence relations. This is not a
book about smooth Morse theory either. For smooth Morse theory, one
can consult Milnor’s classic work [116] or a more modern exposition in
[129]. A discussion of the relations between the smooth and discrete
versions may be found in [27, 29, 99].
One of the main lenses through which the text views topology is ho-
mology. A foundational result in discrete Morse theory consists of the
(weak) discrete Morse inequalities; it says that if 𝐾 is a simplicial com-
plex and 𝑓 ∶ 𝐾 → ℝ a discrete Morse function with 𝑚𝑖 critical simplices
of dimension 𝑖, then 𝑏𝑖 ≤ 𝑚𝑖 where 𝑏𝑖 is the 𝑖th Betti number. To prove
this theorem and do calculations, we use 𝔽2 -simplicial homology and
build a brief working understanding of the necessary linear algebra in
Chapter 3. Chapter 1 introduces simplicial complexes, collapses, and
simple homotopy type, all of which are standard topics in topology.
Any book reflects the interests and point of view of the author. Com-
bining this with space considerations, I have regrettably had to leave
much more out than I included. Several exclusions are worth mention-
ing here. Discrete Morse theory features many interesting computa-
tional aspects, only a few of which are touched upon in this book. These
include homology and persistent homology computations [53, 80, 82],
matrix factorization [86], and cellular sheaf cohomology computations
[48]. Mathematicians have generalized and adapted discrete Morse the-
ory to various settings. Heeding a call from Forman at the end of A user’s
guide to discrete Morse theory [70], several authors have extended discrete
Morse theory to certain kinds of infinite objects [8, 10, 12, 15, 105]. Dis-
crete Morse theory has been shown to be a special case [155] of Bestvina-
Brady discrete Morse theory [34, 35] which has extensive applications in
geometric group theory. There is an algebraic version of discrete Morse
theory [87, 102, 142] involving chain complexes, as well as a version for
random complexes [130]. E. Minian extended discrete Morse theory to
include certain collections of posets [118], and B. Benedetti developed
discrete Morse theory for manifolds with boundary [28]. There is also
a version of discrete Morse theory suitable for reconstructing homotopy
type via a certain classifying space [128]. K. Knudson and B. Wang have
recently developed a stratified version of discrete Morse theory [100].
The use of discrete Morse theory as a tool to study other kinds of mathe-
matics has proved invaluable. It has been applied to study certain prob-
lems in combinatorics and graph theory [16, 41, 49, 88, 106, 108] as well
as configuration spaces and subspace arrangements [60, 122, 123, 139].
It is also worth noting that before Forman, T. Banchoff also developed
a discretized version of Morse theory [17–19]. This, however, seemed
to have limited utility. E. Bloch found a relationship between Forman’s
discrete Morse theory and Banchoff’s [36].
I originally developed these ideas for a course in discrete Morse the-
ory taught at Ursinus College for students whose only prerequisite was
a proof-writing course. An introductory course might cover Chapters 1–
5 and Chapter 8. For additional material, Chapters 6 and 9 are good
choices for a course with students who have an interest in computer
science, while Chapters 7 and 10 are better for students interested in
pure math. Some of the more technical proofs in these chapters may
be skipped. A more advanced course could begin at Chapter 2 and cover
the rest of the book, referring back to Chapter 1 when needed. This book
could also be used as a supplemental text for a course in algebraic topol-
ogy or topological combinatorics, an independent study, or a directed
study, or as the basis for an undergraduate research project. It is also in-
tended for research mathematicians who need a primer in or reference
for discrete Morse theory. This includes researchers in not only topology
but also combinatorics who would like to utilize the tools that discrete
Morse theory provides.
Exercises and Problems
The structure of the text reflects my philosophy that “mathematics is
not a spectator sport” and that the best way to learn mathematics is to ac-
tively do mathematics. Scattered throughout each chapter are tasks for
the reader to work on, labeled “Exercise” or “Problem.” The distinction
between the two is somewhat artificial. The intent is that an Exercise is
a straightforward application of a definition or a computation of a sim-
ple example. A Problem is either integral to understanding, necessary
for other parts of the book, or more challenging. The level of difficulty
of the Problems can vary substantially.
A note on the words “easy,” “obvious,” etc.
In today’s culture, we often avoid using words such as “easily,”
“clearly,” “obviously,” and the like. It is thought that these words can
be stumbling blocks for readers who do not find it clear, causing them
to become discouraged. For that reason, I have attempted to avoid using
these words in the text. However, the text is not completely purged of
such words, and I would like to convey what I mean when I use them.
I often tell my students that a particular mathematical fact is “easy but
it is very difficult to see that it is easy.” By this I mean that one may
need to spend a significant amount of time struggling to understand the
meaning of the claim before it “clicks.” So when the reader sees words
like “obviously,” she should not despair if it is not immediately obvious
to her. Rather, the word is an indication that should alert the reader to
write down an example, rewrite the argument in her own words, or stare
at the definition until she gets it.
Errata
A list of typos, errors, and corrections for the book will be kept at
http://webpages.ursinus.edu/nscoville/DMTerratum.html.
Acknowledgements
Many people helped and supported my writing of this book. Ranita
Biswas, Sebastiano Cultrera di Montesano, and Morteza Saghafian
worked through the entire book and offered detailed comments and sug-
gestions for improvement. Steven Ellis, Mark Ellison, Dominic Klyve,
Max Lin, Simon Rubinstein-Salzedo, and Jonathan Webster gave helpful
mathematical feedback and caught several typos and errors. I am grate-
ful for the help of Kathryn Hess, Chris Sadowski, Paul Pollack, Lisbeth
Fajstrup, Andy Putman, and Matthew Zaremsky for answering specific
questions, as well as Dana Williams and Ezra Miller for their help
developing the index. A very special thanks to Paweł Dłotko, Mimi Tsuruga,
Chris Tralie, and David Millman for their help with Chapter 9. Of course,
any errors in the book are solely mea culpa, mea culpa, mea maxima
culpa. I would like to thank Kevin Knudson for his encouragement and
support of this book, as well as Ursinus College, the Mathematics and
Computer Science department, and all my students for supporting this
project. I would like to express my appreciation to Ina Mette at AMS for
her help and patience throughout the submission and publication pro-
cess, as well as several anonymous reviewers. My family has provided
amazing support for this project, through both encouragement and feed-
back. I am grateful for edits and suggestions from my maternal cousin
Theresa Goenner and my paternal cousin Brooke Chantler who resisted
the urge to change all the spellings to reflect the Queen’s English. A spe-
cial thanks to both my father Joseph Scoville and my aunt Nancy Mans-
field, neither of whom has taken a course in calculus, but who read ev-
ery single word of this book and gave detailed feedback which greatly
improved its readability. Finally, this project could not have been com-
pleted without the love and support of my family: my children Gianna,
Aniela, Beatrix, Felicity, and Louisa-Marie, and most especially my wife
Jennifer, mio dolce tesoro.
Nick Scoville
Feast of Louis de Montfort
Chapter 0
What is discrete Morse
theory?
This chapter serves as a gentle introduction to the idea of discrete Morse
theory and hence is free from details and technicalities. The reader who
does not need any motivation may safely skip this chapter. We will in-
troduce discrete Morse theory by looking at its two main components:
discrete topology and classical Morse theory. We then combine these
two ideas in Section 0.3 to give the reader a taste of discrete Morse the-
ory. First we ought to address the question “What is topology?” I’m glad
you asked!
Topologists study the same objects you might study in geometry,
as opposed to algebra or number theory. In algebra you study mostly
equations, and in number theory you study integers. In geometry, you
study mostly geometric objects: lines, points, circles, cubes, spheres, etc.
While equations and integers certainly come up in the study of geometry,
they are used only secondarily as tools to learn about the geometric ob-
jects. Topology, then, is like geometry in so far as it studies points, lines,
circles, cubes, spheres—any physical shape you can think of—and all
those objects somehow extrapolated into higher dimensions. So far so
good. But it differs from geometry in the following way: topology does
not have a concept of “distance,” but it does have one of nearness. Now,
I consider myself an amateur scholastic, and hence I am quite used to
the charge of having made a distinction without a difference.¹ Below
we establish a distinction between distance and nearness, a distinction
that indeed makes a difference. Keep in mind, though, that not positing
a distance does not imply that distance does not exist. Rather, it may
simply mean we do not have access to that information. This will be a
key point in what follows. Finally, topology is good at both counting and
detecting holes. A hole is a somewhat nebulous beast. What exactly is
it? If you think about it, a hole is defined in terms of what isn’t there, so
it can be unclear what a hole is or how to tell if you have one. Topology
develops tools to detect holes. We will illustrate these aspects of topology
with three examples in the following section.
0.1. What is discrete topology?
We introduce discrete topology through three applications.
0.1.1. Wireless sensor networks. Sensors surround us. They are in
our cell phones, clickers, and EZ passes, to name just a few examples.
They are, however, not always found in isolation. Sometimes a collec-
tion of sensors work together for a common purpose. Take, for example,
cell phone towers. Each tower has a sensor to pick up cell phone signals.
The whole cell phone system is designed to cover the largest area pos-
sible, and to that end, the towers communicate with each other. Given
the local information that each cell phone tower provides, can we learn
whether a particular region is covered, so that no matter where you are
in the region you will have cell phone service?
¹Negare numquam, affirmare rare, distinguere semper. (“Never deny, rarely affirm, always distinguish.”)
Consider the above system of cell phone towers. Each point repre-
sents a cell tower, and each circle surrounding the point is the tower’s
sensing area. We’ll assume that all towers have the same known sens-
ing radius along with some other technical details (see reference [51] or
[76, § 5.6]). We would like each cell phone tower to send us some infor-
mation and to then use that information to determine whether or not
we have coverage everywhere in the region. What kind of information
would we like? Perhaps each tower can tell us its location in space, its
global coordinates. If it can send us this information, then we can use
methods from geometry to tell whether or not we have coverage in the
whole region. Not terribly exciting. But what if we don’t have access to
that information? Suppose that these are cheap sensors lacking a GPS, or
that their coordinate information was lost in transmission. What can we
do then? Here is where topology is powerful. Topology makes fewer as-
sumptions as to what information we know or have access to. We make
the much weaker assumption that each tower can only tell which other
towers it is in contact with. So if tower 𝐴 has tower 𝐵 in its radius, it
communicates that information to us. But we don’t have any idea if 𝐵
is just on the boundary of 𝐴’s radius or if 𝐵 is nearly on top of 𝐴. In
other words, we know that 𝐴 and 𝐵 are “near” to each other, but we
don’t know their distance from each other. Similarly, if 𝐴 and 𝐶 do not
communicate, it could be that 𝐶 is just outside 𝐴’s sensing radius or 𝐶
is completely on the other side of the region from 𝐴. The only informa-
tion each tower tells us is what other cell towers are in its radius. Here
we reemphasize the important point mentioned above: it is not that the
distance does not exist; we just do not have access to that information.
Now, with this much more limited information, can we still determine
if we have a hole? The answer is that we sometimes can. Using the ex-
ample above, we build the communication graph, connecting any two
towers that can communicate with each other.
Next, we fill in any region formed by a triangle, which represents
three towers, any two of which can communicate with each other.
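The steps above can be sketched in code. This is my own illustration, not from the book: the function name, the coordinates, and the radius are invented for the example. Note that the coordinates appear here only to simulate what the sensors detect; in the scenario described, only the resulting adjacency would ever be reported to us.

```python
from itertools import combinations

def communication_complex(towers, radius):
    """Build the edges and filled triangles of the communication complex:
    two towers are joined when they lie within the common sensing radius,
    and a triangle is filled when all three of its towers pairwise communicate."""
    def near(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) <= radius ** 2

    n = len(towers)
    edges = [(i, j) for i, j in combinations(range(n), 2)
             if near(towers[i], towers[j])]
    edge_set = set(edges)
    triangles = [(i, j, k) for i, j, k in combinations(range(n), 3)
                 if (i, j) in edge_set and (j, k) in edge_set
                 and (i, k) in edge_set]
    return edges, triangles

# Four towers at the corners of a unit square, sensing radius 1.1:
# each side communicates, but opposite corners (distance about 1.41) do not,
# so no triangle fills in -- the complex is a hollow square with one hole.
towers = [(0, 0), (1, 0), (1, 1), (0, 1)]
edges, triangles = communication_complex(towers, 1.1)
```

The hollow square this produces is exactly the kind of simplicial complex whose hole the methods of Section 3.2 will detect.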
Now we remove the cell tower circles.
What we are left with is a pure mathematical model of this real-life
situation. This object is called a simplicial complex, and it is this object
that we study in this book. We can take this model, analyze it using the
methods of topology, and determine that there is exactly one hole in it.
You will learn how to do this in Section 3.2. The point for now is that we
wanted to know whether or not there was a hole in a region, we assumed
we had access to very limited information, and we were able to detect
a hole by modeling our information with a topological object called a
simplicial complex.
For the details of the mathematics behind this example, see the pa-
per [51] or [76, § 5.6].
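For readers who want to peek ahead at how “exactly one hole” can be detected, here is a minimal 𝔽₂ Betti-number computation. This is my own sketch, not the book’s development (which comes in Chapter 3); the function names and the bitmask encoding of matrix rows are invented for the example.

```python
def rank_gf2(rows):
    """Rank over F_2 of a binary matrix whose rows are int bitmasks,
    computed by Gaussian elimination with XOR."""
    rank = 0
    width = max((r.bit_length() for r in rows), default=0)
    for col in range(width):
        pivot = next((i for i in range(rank, len(rows))
                      if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

def betti_1(edges, triangles):
    """First Betti number over F_2:
    b_1 = dim ker(boundary_1) - rank(boundary_2)."""
    edge_index = {frozenset(e): i for i, e in enumerate(edges)}
    d1 = [(1 << u) | (1 << v) for u, v in edges]   # edge -> its two endpoints
    d2 = []                                        # triangle -> its three edges
    for a, b, c in triangles:
        row = 0
        for face in ((a, b), (b, c), (a, c)):
            row |= 1 << edge_index[frozenset(face)]
        d2.append(row)
    return len(edges) - rank_gf2(d1) - rank_gf2(d2)

# A hollow square -- 4 towers, 4 links, no filled triangles -- has one hole,
# while filling in a triangle kills its hole.
b1 = betti_1([(0, 1), (1, 2), (2, 3), (3, 0)], [])
```

You will see in Section 3.2 why counting ranks of these boundary matrices counts holes.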
0.1.2. Tethered Roomba. A Roomba is a small disk-shaped automatic
vacuum cleaner. It is automatic in the sense that you turn it on, set it
down and leave the room, and after, say, two hours the entire room is
cleaned. While this is a promising idea, it does have a major drawback:
power. The idea of an upright cordless vacuum is still somewhat of a
novelty, and those that do exist seem not to be terribly powerful.
A vacuum must be plugged into the wall to have enough power to do a
decent job in a room of any reasonable size. Hence, a battery-powered
Roomba does not do a very good job of vacuuming the floor, or if it does,
the batteries need to be replaced often. One way around this problem
is to introduce a tethered Roomba. This is a Roomba with a cord stored
inside of it and coming out the back. The cord plugs into the wall and
remains taut in the sense that if the Roomba moves forward, the cord is
released, and if the Roomba moves backwards, the cord is retracted.
This will take care of the power problem. However, it introduces
a new problem, illustrated in the above picture—namely, the cord can
easily become wrapped around a piece of furniture in the room, and the
Roomba could get stuck. In this scenario, obstacles in the room to be
avoided, such as furniture, can be thought of as holes. When the Roomba
makes a path, we want to make sure that the path does not loop around
any holes. Topology allows us to detect holes (which corresponds to
wrapping our cord around a piece of furniture) so that we can tell the
Roomba to retrace its path.
In the above picture, each piece of furniture in the room is replaced
by a hole in an object, and the problem of the Roomba wrapping its cord
around a piece of furniture becomes equivalent to detecting holes in this
object via looping around the hole. The lesson of this example is that the
size of the hole is irrelevant. It does not matter if your cord is wrapped
around a huge sectional or a tiny pole—a stuck cord is a stuck cord.
Topology, then, is not interested in the size of the hole, but the existence
of the hole.
0.1.3. Modeling a rat brain. A Swiss team of researchers is now en-
gaged in the Blue Brain Project, a study of brain function through com-
puter simulations. The team has created a digital model of part of the
somatosensory cortex of a rat on which neural activity can be simulated
[115]. The model comprises 31,000 neurons and 37 million synapses,
forming 8 million connections between the neurons. In addition to re-
searchers in neuroscience, biology, and related disciplines, the team in-
cludes mathematicians, and most important for our purposes, the team
employs experts in topology. We can imagine what an applied mathe-
matician or expert in differential equations might do in a project like this,
but what can a topologist contribute? The model of the rat brain can be
thought of as a simplicial complex. Neurons are modeled with points,
and the lines are the connections formed by synapses. Neuroscience re-
search has made clear the importance of connectivity in the brain: if two
neurons are not connected to each other, they cannot communicate di-
rectly. This connectivity tends to be directional, so that information may
be able to flow from neuron A to neuron B but not necessarily vice versa.
Information can also travel in loops, cycling around the same neurons
multiple times. Topology can reveal not only these kinds of connections
and loops, but also “higher-order notions” of connectivity, such as that
illustrated below.
If we shade in each triangle, we obtain something that encloses a
void.
This has interesting topological properties (see Problem 3.21), but
does such a structure correspond to anything neurologically? Maybe
[135]! If loops tell us something, perhaps this kind of structure does
as well. The idea is for the mathematician to describe the higher-order
connectivity in this rat brain model. If they find a 12- or 15- or 32-
dimensional hole, what does this mean neurologically? If nothing else,
topology can inform the neuroscientists of the existence of highly struc-
tured regions of the brain.
0.2. What is Morse theory?
Discrete Morse theory is a discretized analogue of classical Morse the-
ory. A brief discussion of classical Morse theory, then, may help us to
understand this newer field. Classical Morse theory is named after the
American mathematician Marston Morse (1892–1977), who developed
much of the mathematics now bearing his name in the 1920s and 1930s
(e.g. [124, 125]). One important result that Morse proved using Morse
theory is that any two points on a sphere of a certain class are joined by
an infinite number of locally shortest paths or geodesics [126]. However,
it wasn’t until the 1960s that Morse theory really shot into the limelight.
In 1961, Stephen Smale (1930–) provided a solution to a famous,
unsolved problem in mathematics called the generalized Poincaré conjec-
ture in dimensions greater than 4 [143]. This was an amazing result, caus-
ing much excitement in the mathematical community. Grigori Perel-
man (1966–) generated similar excitement in 2004 when he proved the
Poincaré conjecture in dimension 3 [121]. A couple of years after Smale
proved the generalized Poincaré conjecture, he extended his techniques
to prove a result called the ℎ-cobordism theorem for smooth manifolds, a
result that gave profound insight into objects that look locally like
Euclidean space. Smale was awarded a Fields Medal, the mathemati-
cal equivalent of the Nobel Prize, for his work. In 1965, John Milnor
(1931–), another Fields Medal winner, gave a new proof of Smale’s stun-
ning result using, among other techniques, Morse theory [117]. Milnor’s
beautiful proof of Smale’s result using Morse theory cemented Morse
theory as, to quote Smale himself, “the single greatest contribution of
American mathematics” [144].
Morse theory has proved invaluable not only through its utility in
topology, but also through its applicability outside of topology. Another
one of Smale’s contributions was figuring out how to “fit Morse theory
into the scheme of dynamical systems” [37, p. 103], a branch of math-
ematics studying how states evolve over time according to some fixed
rules. Physicist and Fields Medal winner Ed Witten (1951–) developed
a version of Morse theory to study a physics phenomenon called super-
symmetry [153]. Raoul Bott (1923–2005), the doctoral advisor of Smale,
relates the following anecdote about Witten’s first exposure to Morse the-
ory:
In August, 1979, I gave some lectures at Cargèse on
equivariant Morse theory, and its pertinence to the
Yang-Mills theory on Riemann surfaces. I was report-
ing on joint work with Atiyah to a group of very bright
physicists, young and old, most of whom took a rather
detached view of the lectures . . . Witten followed the
lectures like a hawk, asked questions, and was clearly
very interested. I therefore thought I had done a good
job indoctrinating him in the rudiments of the half-
space approach [to Morse theory], etc., so that I was
rather nonplussed to receive a letter from him some
eight months later, starting with the comment, “Now
I finally understand Morse theory!” [37, p. 107]
So what is this Morse theory? Civil, natural, and divine law all re-
quire that the following picture of a torus (outside skin of a donut) be
included in any book on Morse theory.
How can we study this object? One strategy often employed in study-
ing anything, not just mathematics, is to break your object down into
its fundamental or simple pieces. For example, in biology, you study
the human organism by studying cells. In physics, you study matter by
breaking it down into atoms. In number theory, you study integers by
breaking them down into products of primes. What do we do in topol-
ogy? We break down objects into simple pieces, such as those shown
below. Think of them as the primes of topology.
After stretching, bending, pulling, and pushing these pieces, it turns
out that we can glue them together to get the torus back. But how did we
find these pieces? Is there a systematic way to construct them—a strat-
egy that we can use on any such object? Let’s define a “height function”
𝑓 that takes any point on the torus and yields the vertical “height” of
that point in space. Given any height 𝑧, we can then consider 𝑀𝑧 ∶=
𝑓⁻¹(−∞, 𝑧]. The result is a slice through the torus at height 𝑧,
leaving everything below level 𝑧. For example, 𝑀4 is the whole torus, while
𝑀0, 𝑀0.5, and 𝑀3.5 are shown below.
[Figure: three views of the torus beside a vertical axis marked 0 through 4, showing the sublevel sets 𝑀0, 𝑀0.5, and 𝑀3.5.]
Notice that we can obtain 𝑀4 from 𝑀3.5 by gluing a “cap” onto 𝑀3.5 .
The key to detecting when this gluing occurs is to find critical points of
your height function. You may recall from calculus that a critical point
occurs where the derivative (here, all the partial derivatives) is 0.
Geometrically, this corresponds to a local minimum, local maximum, or
“saddle point.” With this in mind, we can identify four critical points on
the torus at heights 0, 1, 3, and 4. This simple observation is the starting
point of Morse theory. It recovers the topology of our object and gives
us information about our object, such as information about its holes.
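As a quick sanity check, and as a preview of the Morse inequalities mentioned in the Preface: assigning these four critical points their standard indices (the indices are not spelled out in the text), the alternating count recovers the Euler characteristic of the torus, and each Betti number 𝑏0 = 1, 𝑏1 = 2, 𝑏2 = 1 is bounded by the number of critical points of that index.

```latex
% Critical points of the height function on the torus, by index:
%   height 0: minimum (index 0);  heights 1 and 3: saddles (index 1);
%   height 4: maximum (index 2).
\chi(T^2) = c_0 - c_1 + c_2 = 1 - 2 + 1 = 0,
\qquad b_i \le c_i : \quad 1 \le 1, \quad 2 \le 2, \quad 1 \le 1.
```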
With this in the background, Robin Forman, who received his PhD
under Bott, introduced in 1998 a discretized version of Morse theory in a
paper titled Morse theory for cell complexes [65]. Forman began his paper
by writing:
In this paper we will present a very simple discrete
Morse theory for CW complexes. In addition to prov-
ing analogues of the main theorems of Morse theory,
we also present discrete analogues of such (seemingly)
intrinsically smooth notions as the gradient vector
field and the gradient flow associated to a Morse func-
tion. Using this, we define a Morse complex, a differ-
ential complex built out of the critical points of our
discrete Morse function which has the same homol-
ogy as the underlying manifold.
Although some of the terms may be unfamiliar, we can already see
that Forman was intending to define and extend many of the concepts
that we saw above.
0.3. Simplifying with discrete Morse theory
With our crash course in discrete topology and Morse theory behind us,
we now illustrate how the two merge. The main idea in discrete Morse
theory is to break down and study a simplicial complex in an analogous
manner to that of Morse theory. To see how, consider the following sim-
plicial version of a circle:
We can define an analogous notion of a “height function” on each
part of the simplicial complex.
[Figure: the simplicial circle with its simplices labeled by heights: 3 at the top, then 2 and 2, then 1 and 1, and 0 at the bottom.]
Notice that 0 is a local minimum while 3 is a local maximum.² But
just as before, this tells us we can build the circle out of these two pieces.
²There are, of course, technical restrictions that any function on a simplicial complex must satisfy. See Section 2.1 for details.
Strictly speaking, this takes us out of the realm of simplicial com-
plexes, but the idea holds nonetheless. Furthermore, we can use discrete
Morse theory to reduce the complexity of an object without losing perti-
nent information. If we were working on the cell phone system, or the
Roomba design project, or the mapping of a rat’s brain, we would em-
ploy sophisticated computers, using expensive computational resources.
Computing the number of holes tends to run on the order of a constant
times 𝑛³, where 𝑛 is the number of lines that one inputs [55, Chapter
IV]. Hence, reducing the number of points involved can greatly save on
computational effort. For example, if we are interested in determining
whether or not there are any dead zones in the cell phone example, dis-
crete Morse theory can reduce the computational effort in Section 0.1.1
from 58³ to 1³. Of course, this reduction itself does have a cost, but it still
makes the computation far more efficient. You will see how this reduc-
tion is accomplished in Chapter 8 and specifically in Problem 8.38. Such
reductions can be invaluable for huge and unwieldy data sets. For more
details, see [82]. The point for now is to realize that topology presents
interesting questions that can help us model real-world phenomena. Dis-
crete Morse theory can be an invaluable tool in assisting with such analy-
sis.
Chapter 1
Simplicial complexes
In the previous chapter, we saw three examples of applied topology
problems. We modeled these problems with objects called simplicial
complexes, made up of points, lines, faces, and higher-dimensional ana-
logues. The defining characteristics of these simplicial complexes are
given by the relationships of points, lines, etc. to one another.
1.1. Basics of simplicial complexes
One way to communicate a simplicial complex is to draw a picture. Here
is one example:
[Figure: a simplicial complex on five labeled points 𝐴, 𝐵, 𝐶, 𝐷, 𝐸, with the triangle on 𝐴, 𝐵, 𝐶 shaded.]
Lines between points indicate that the two points are related, and
shading in the region enclosed by three mutually related points indicates
that all three points are related. For example, 𝐶 is related to 𝐷 but 𝐶 is
not related to 𝐸. It is important to understand that relations do not have
any geometrical interpretation. A concrete way to see this is to imagine
the above simplicial complex as modeling the relation “had dinner with”
among five people. So Catherine (C) has had dinner with Dominic (D),
Dominic has had dinner with Beatrix (B), and Beatrix has had dinner
with Catherine. Yet Catherine, Dominic, and Beatrix have not all had
dinner together. Contrast this with Beatrix, Catherine, and Aniela (A).
All three of them have had dinner together. This is communicated by the
fact that the region enclosed by 𝐵, 𝐶, and 𝐴 is shaded in. Notice that if
all three of them have had dinner together, this implies that necessarily
any two have had dinner together. These “face relations” are the key to
understanding simplicial complexes.
We can further imagine how any “smooth” or physical object you
can visualize can be approximated by these simplicial complexes. For
example, a sphere like
can be “approximated” or modeled by the following simplicial complex
with hollow interior:
This looks to be a fairly crude approximation. If we want a better ap-
proximation of the sphere, we would use more points, more lines, and
more faces.
By convention, we may fill in space only when three lines are con-
nected, not four or more. If we wish to fill in the space between four
or more lines, we need to break it up into two or more triangles. So the
following is not a simplicial complex:
But this is:
The advantage of viewing a simplicial complex by drawing a picture
in this way is that it is easy to see the relationships between all the parts.
The disadvantage is that it can be difficult to tell just from a picture what
the author is trying to convey. For example, is the volume enclosed by
the tetrahedron
filled in with a 3-dimensional “solid” or not? Moreover, we cannot draw
a picture in more than three dimensions. But at this point, we should be
able to reasonably understand the idea of building something out of 0-
simplices (points), 1-simplices (lines), and 𝑛-simplices. The formal def-
inition is given below.
Definition 1.1. Let 𝑛 ≥ 0 be an integer and [𝑣𝑛 ] ∶= {𝑣0 , 𝑣1 , … , 𝑣𝑛 } a
collection of 𝑛 + 1 symbols. An (abstract) simplicial complex 𝐾 on
[𝑣𝑛 ] or a complex is a collection of subsets of [𝑣𝑛 ], excluding ∅, such that
(a) if 𝜎 ∈ 𝐾 and 𝜏 ⊆ 𝜎, then 𝜏 ∈ 𝐾;
(b) {𝑣𝑖 } ∈ 𝐾 for every 𝑣𝑖 ∈ [𝑣𝑛 ].
The set [𝑣𝑛 ] is called the vertex set of 𝐾 and the elements {𝑣𝑖 } are called
vertices or 0-simplices. We sometimes write 𝑉(𝐾) for the vertex set
of 𝐾.
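Definition 1.1 is concrete enough to check by machine. The sketch below (Python; a complex is represented as a set of `frozenset`s of vertex labels, a convention of this illustration rather than of the text) tests condition (a); condition (b) is automatic once we take the vertex set of 𝐾 to be the union of its simplices.

```python
from itertools import combinations

def is_complex(K):
    """Check condition (a) of Definition 1.1: every nonempty proper
    subset of each simplex in K must itself belong to K."""
    for sigma in K:
        for r in range(1, len(sigma)):
            for tau in combinations(sigma, r):
                if frozenset(tau) not in K:
                    return False
    return True

# {a, b, ab} is a simplicial complex; {a, ab} is not, since the
# face {b} of {a, b} is missing.
good = {frozenset("a"), frozenset("b"), frozenset("ab")}
bad = {frozenset("a"), frozenset("ab")}
```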
Example 1.2. Let 𝑛 = 5 and 𝑉(𝐾) ∶= {𝑣0 , 𝑣1 , 𝑣2 , 𝑣3 , 𝑣4 , 𝑣5 }. Define 𝐾 ∶=
{{𝑣0 }, {𝑣1 }, {𝑣2 }, {𝑣3 }, {𝑣4 }, {𝑣5 }, {𝑣1 , 𝑣2 }, {𝑣1 , 𝑣4 }, {𝑣2 , 𝑣3 }, {𝑣1 , 𝑣3 }, {𝑣3 , 𝑣4 },
{𝑣3 , 𝑣5 }, {𝑣4 , 𝑣5 }, {𝑣3 , 𝑣4 , 𝑣5 }}. It is easy to check that 𝐾 satisfies the defi-
nition of a simplicial complex. This simplicial complex may be viewed
as follows:
[Figure: 𝐾 drawn in the plane, with the triangle on 𝑣3, 𝑣4, 𝑣5 shaded.]
The same simplicial complex may also be drawn as
[Figure: the same complex 𝐾 drawn with its vertices in different positions.]
In fact, there are infinitely many ways to draw this complex with dif-
ferent positions, distances, angles, etc. This illustrates a key conceptual
point. There is no concept of position, distance, angle, area, volume, etc.
in a simplicial complex. Rather, a simplicial complex only carries infor-
mation about relationships between points. In our example, 𝑣2 is related
to 𝑣3 and 𝑣3 is related to 𝑣4 , but 𝑣2 is not related to 𝑣4 . Any two of 𝑣3 , 𝑣4 ,
and 𝑣5 are related to each other, and 𝑣0 isn’t related to anything. Again,
think of the relation “had dinner with.” Either two people have had din-
ner together or they have not. Either three people have had dinner all
together or they have not. There is no geometry to the relation, only a
binary yes or no.
Hence, although the geometry inherent in any particular picture
can be misleading, we will nevertheless often communicate a simpli-
cial complex through a picture, keeping in mind that any geometry the
picture conveys is an accident of the particular choice of drawing.
One advantage of knowing about simplicial complexes is that they
can be used to model real-world phenomena. Hence, if one has a theory
of abstract simplicial complexes, one can immediately use it in applica-
tions. See for instance [58].
To give you some practice working with simplicial complexes, the
following exercise asks you to translate between the set definition of a
complex and the picture, and vice versa.
Exercise 1.3. (i) Let 𝑉(𝐾) = [𝑣6 ] and 𝐾 = {{𝑣0 }, {𝑣1 }, {𝑣2 },
{𝑣3 }, {𝑣4 }, {𝑣5 }, {𝑣6 }, {𝑣2 , 𝑣3 }, {𝑣3 , 𝑣5 }, {𝑣2 , 𝑣5 }, {𝑣1 , 𝑣3 },
{𝑣1 , 𝑣4 }, {𝑣3 , 𝑣4 }, {𝑣5 , 𝑣6 }, {𝑣1 , 𝑣3 , 𝑣4 }, {𝑣2 , 𝑣3 , 𝑣5 }}. Draw the simpli-
cial complex 𝐾.
(ii) Let 𝐾 be a simplicial complex on [𝑣4 ] given by
Write down the sets given by 𝐾.
Definition 1.4. A set 𝜎 of cardinality 𝑖 + 1 is called an 𝑖-dimensional
simplex or 𝑖-simplex. The dimension of 𝐾, denoted by dim(𝐾), is the
maximum of the dimensions of all its simplices. The 𝑐-vector of 𝐾 is
the vector 𝑐𝐾⃗ ∶= (𝑐0 , 𝑐1 , … , 𝑐dim(𝐾) ) where 𝑐𝑖 is the number of simplices
of 𝐾 of dimension 𝑖, 0 ≤ 𝑖 ≤ dim(𝐾). A subcomplex 𝐿 of 𝐾, denoted
by 𝐿 ⊆ 𝐾, is a subset 𝐿 of 𝐾 such that 𝐿 is also a simplicial complex. If
𝜎 ∈ 𝐾 is a simplex, we write 𝜎̄ for the subcomplex generated by 𝜎; that
is, 𝜎̄ ∶= {𝜏 ∈ 𝐾 ∶ 𝜏 ⊆ 𝜎}.¹ If 𝜎, 𝜏 ∈ 𝐾 with 𝜏 ⊆ 𝜎, then 𝜏 is a face of 𝜎
and 𝜎 is a coface of 𝜏. We sometimes use 𝜏 < 𝜎 to denote that 𝜏 is a face
of 𝜎. The 𝑖-skeleton of 𝐾 is given by 𝐾 𝑖 = {𝜎 ∈ 𝐾 ∶ dim(𝜎) ≤ 𝑖}.
¹Note the careful distinction between a simplex 𝜎 and a subcomplex 𝜎̄.
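The invariants of Definition 1.4 are straightforward to compute. A minimal sketch, representing a complex as a set of `frozenset`s of vertex labels (an assumption of this illustration), using the complex of Example 1.2 as a test case:

```python
def dimension(sigma):
    # A simplex with i + 1 vertices has dimension i.
    return len(sigma) - 1

def dim_K(K):
    # dim(K) is the maximum of the dimensions of its simplices.
    return max(dimension(s) for s in K)

def c_vector(K):
    # c_i counts the simplices of K of dimension i.
    c = [0] * (dim_K(K) + 1)
    for s in K:
        c[dimension(s)] += 1
    return tuple(c)

def skeleton(K, i):
    # The i-skeleton keeps the simplices of dimension at most i.
    return {s for s in K if dimension(s) <= i}

# The complex of Example 1.2: six vertices, seven edges, one triangle
# (vertex v_j abbreviated to the label "j").
K = {frozenset(s) for s in
     ["0", "1", "2", "3", "4", "5",
      "12", "14", "23", "13", "34", "35", "45", "345"]}
```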
Example 1.5. Let 𝐾 be the simplicial complex below.
[Figure: a simplicial complex 𝐾 on the ten vertices 𝑣0, …, 𝑣9, with four shaded triangles.]
To simplify notation, we describe a simplex by concatenating its
0-simplices. For example, the 2-simplex {𝑣6 , 𝑣8 , 𝑣9 } will be written as
𝑣6 𝑣8 𝑣9 , and the 1-simplex {𝑣3 , 𝑣4 } is written 𝑣3 𝑣4 . Since the maximum
dimension of the faces is 2 (e.g. the simplex 𝑣1 𝑣4 𝑣2 ), dim(𝐾) = 2. By
counting the number of 0-, 1-, and 2-simplices, we see that the 𝑐-vector
for 𝐾 is 𝑐 ⃗ = (10, 16, 4).
The 1-skeleton is given by
[Figure: the 1-skeleton of 𝐾: the same ten vertices and sixteen edges, with no triangles filled in.]
Definition 1.6. Let 𝐾 be a simplicial complex. We use 𝜎(𝑖) to denote
a simplex of dimension 𝑖, and we write 𝜏 < 𝜎(𝑖) for any face of 𝜎 of
dimension strictly less than 𝑖. The number dim(𝜎) − dim(𝜏) is called
the codimension of 𝜏 with respect to 𝜎. For any simplex 𝜎 ∈ 𝐾,
we define the boundary of 𝜎 in 𝐾 by 𝜕𝐾 (𝜎) ∶= 𝜕(𝜎) ∶= {𝜏 ∈ 𝐾 ∶
𝜏 is a codimension-1 face of 𝜎}. A simplex of 𝐾 that is not properly con-
tained in any other simplex of 𝐾 is called a facet of 𝐾.
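The boundary and facets of Definition 1.6 can be sketched the same way (complexes as sets of `frozenset`s; illustrative code, not the book's):

```python
from itertools import combinations

def boundary(sigma):
    """Codimension-1 faces of a simplex of dimension >= 1.  A vertex
    has no codimension-1 faces inside K, so it returns the empty set."""
    if len(sigma) <= 1:
        return set()
    return {frozenset(t) for t in combinations(sigma, len(sigma) - 1)}

def facets(K):
    """Simplices of K not properly contained in any other simplex."""
    return {s for s in K if not any(s < t for t in K)}
```

For instance, mirroring Example 1.7, the boundary of the 2-simplex on the labels 3, 4, 9 consists of its three edges.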
Example 1.7. We continue with the simplicial complex in Example 1.5,
illustrating the above definitions. The boundary of 𝑣3 𝑣4 𝑣9 is given by
𝜕𝐾 (𝑣3 𝑣4 𝑣9 ) = {𝑣3 𝑣4 , 𝑣3 𝑣9 , 𝑣4 𝑣9 }. The 1-simplex 𝑣0 𝑣6 is not contained in
any other simplex of 𝐾, so 𝑣0 𝑣6 is a facet of 𝐾. By contrast, 𝑣3 𝑣1 is not a
facet since it is contained in the larger simplex 𝑣1 𝑣3 𝑣4 . This may also be
expressed by saying that 𝑣3 𝑣1 is a face of 𝑣1 𝑣3 𝑣4 while 𝑣1 𝑣3 𝑣4 is a coface
of 𝑣3 𝑣1 . This idea may be used to communicate a simplicial complex by
considering the complex generated by a list of facets. Thus the above
simplicial complex would be
𝐾 = ⟨𝑣6 𝑣8 𝑣9 , 𝑣8 𝑣0 , 𝑣6 𝑣0 , 𝑣9 𝑣4 𝑣3 , 𝑣1 𝑣3 𝑣4 , 𝑣2 𝑣1 𝑣4 , 𝑣0 𝑣3 , 𝑣0 𝑣1 , 𝑣4 𝑣5 , 𝑣5 𝑣7 ⟩
where ⟨⋅⟩ means the simplicial complex generated by the enclosed col-
lection of simplices (see Definition 1.10). Without extra structure, this
is the most efficient way to communicate a simplicial complex. This is
how simplicial complexes are stored in files, as any good program will
take the facets and generate the simplicial complex.
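In code, that generation step is a short closure computation. A hedged sketch (not the file format or API of any particular program): expand each facet into all of its nonempty subsets.

```python
from itertools import combinations

def from_facets(facet_list):
    """Generate the complex <facets>: the closure of the listed facets
    under taking nonempty subsets (cf. Definition 1.10)."""
    K = set()
    for f in facet_list:
        f = frozenset(f)
        for r in range(1, len(f) + 1):
            K.update(frozenset(t) for t in combinations(f, r))
    return K
```

A single triangular facet generates all seven simplices of the full triangle.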
Problem 1.8. Let 𝐾 be the following simplicial complex:
[Figure: a simplicial complex 𝐾 on the six vertices 𝑎, 𝑏, 𝑐, 𝑑, 𝑒, 𝑓, with two shaded triangles.]
(i) What is the dimension of 𝐾?
(ii) List all the facets of 𝐾.
(iii) Write down the 𝑐-vector of 𝐾.
(iv) Give an example of a subcomplex of 𝐾.
(v) Give an example of a simplex of 𝐾.
(vi) List the faces of 𝑐𝑑𝑓.
(vii) Compute 𝜕(𝑏𝑐) and 𝜕(𝑎𝑐𝑑).
(viii) Let 𝜎 = 𝑎𝑐𝑑. Find all 𝜏(0) such that 𝜏(0) < 𝜎.
Exercise 1.9. Let 𝐾 be a simplicial complex and 𝜎 ∈ 𝐾 an 𝑖-simplex of
𝐾. Prove that 𝜎 contains 𝑖+1 codimension-1 faces. In other words, prove
that |𝜕𝐾 (𝜎)| = dim(𝜎) + 1.
Definition 1.10. Let 𝐻 ⊆ 𝒫([𝑣𝑛 ])−{∅} where 𝒫([𝑣𝑛 ]) denotes the power
set of [𝑣𝑛 ]. The simplicial complex generated by 𝐻, denoted by ⟨𝐻⟩,
is the smallest simplicial complex containing 𝐻; that is, if 𝐽 is any sim-
plicial complex such that 𝐻 ⊆ 𝐽, then ⟨𝐻⟩ ⊆ 𝐽.
Exercise 1.11. Prove that if 𝐻 is a simplicial complex, then ⟨𝐻⟩ = 𝐻.
Exercise 1.12. Prove that ⟨{𝜎}⟩ = 𝜎̄ for any simplex 𝜎 ∈ 𝐾.
1.1.1. A zoo of examples. Before moving further into the theory of
simplicial complexes, we compile a list of some classical simplicial com-
plexes. These complexes will be used throughout the text.
Example 1.13. The simplicial complex Δ𝑛 ∶= 𝒫([𝑣𝑛 ]) − {∅} is the 𝑛-
simplex. Be careful not to confuse the 𝑛-simplex Δ𝑛 , which is a simpli-
cial complex, with an 𝑛-simplex 𝜎, which is an element of the complex
𝐾. While these two concepts share the same name, their meanings will
be determined by context.
In addition, we define the (simplicial) 𝑛-sphere by
𝑆 𝑛 ∶= Δ𝑛+1 − {[𝑣𝑛+1 ]}.
Problem 1.14. Draw a picture of Δ0 , Δ1 , Δ2 , Δ3 , and 𝑆 0 , 𝑆 1 , 𝑆 2 .
Example 1.15. The simplicial complex consisting of a single vertex or
point is denoted by 𝐾 = ∗. We sometimes abuse notation and write ∗ for
both the one-point simplicial complex and the one point of the simplicial
complex.
Example 1.16. Any simplicial complex 𝐾 such that dim(𝐾) = 1 is called
a graph. Traditionally, such simplicial complexes are denoted by 𝐺, a
convention we sometimes adopt in this text. Because they are easy to
draw, many of our examples and special cases are for graphs.
Example 1.17. Let 𝐾 be a simplicial complex and 𝑣 a vertex not in 𝐾.
Define a new simplicial complex 𝐶𝐾, called the cone on 𝐾, where the
simplices of 𝐶𝐾 are given by 𝜎 and {𝑣} ∪ 𝜎 for all 𝜎 ∈ 𝐾.
Exercise 1.18. Let 𝐾 be the simplicial complex
Draw 𝐶𝐾.
Problem 1.19. Prove that if 𝐾 is a simplicial complex, then 𝐶𝐾 is also a
simplicial complex.
Example 1.20. The torus 𝑇 2 with 𝑐-vector (9, 27, 18):
[Figure: a triangulation of the torus 𝑇² on the vertices 𝑣0, …, 𝑣8, drawn as a square whose opposite sides are identified.]
Notice that this is a 2-dimensional representation of a 3-dimensional ob-
ject. That is, the horizontal sides are “glued” to each other and the verti-
cal sides are “glued” to each other. One can glean this from the picture
by noting that, for example, the horizontal edge 𝑣1 𝑣2 appears on both the
top and the bottom of the picture, but of course it is the same edge. Many
of the more interesting examples of simplicial complexes are represented
in this way.
Example 1.21. The projective plane 𝑃 2 with 𝑐-vector (6, 15, 10):
[Figure: a triangulation of the projective plane 𝑃² on the vertices 𝑣1, …, 𝑣6, drawn as a disk whose antipodal boundary points are identified.]
This one is more difficult to visualize. There is a good reason for
this, as the projective plane needs at least four dimensions to be properly
drawn.
Example 1.22. The dunce cap 𝐷 [157] with 𝑐-vector (8, 24, 17):
[Figure: a triangulation of the dunce cap 𝐷 on the vertices 𝑣1, …, 𝑣8, drawn as a triangle whose three boundary edges are all identified.]
Example 1.23. The Klein bottle 𝒦 with 𝑐-vector (9, 27, 18):
[Figure: a triangulation of the Klein bottle 𝒦 on the vertices 𝑣0, …, 𝑣8, drawn as a square with one pair of opposite sides identified with a reversal of orientation.]
Example 1.24. The Möbius band ℳ with 𝑐-vector (6, 12, 6):
[Figure: a triangulation of the Möbius band ℳ on the vertices 𝑣0, …, 𝑣5, drawn as a rectangle.]
Example 1.25. Björner’s example with 𝑐-vector (6, 15, 11), obtained
by starting with the projective plane and gluing on a facet:
[Figure: Björner's complex on the vertices 𝑣1, …, 𝑣6: the triangulated projective plane of Example 1.21 with one additional facet glued on.]
There are many other examples that we cannot reasonably commu-
nicate here. More information about these and other simplicial com-
plexes, including downloadable versions, may be found online at the
Simplicial complex library [81] or the Library of triangulations [30]. There
is a whole branch of topology, called computational topology [54, 55, 93],
devoted to the computational study of these massive objects. One sub-
theme of this book will be to “tell these complexes apart.” What does
that even mean? We will see in Section 1.2.
1.1.2. Some combinatorics. In Exercise 1.9, you were asked to show
that a simplex of dimension 𝑖 has exactly 𝑖 + 1 codimension-1 faces.
While there are multiple ways to count such faces, one way is to let
𝜎 = 𝑎0 𝑎1 ⋯ 𝑎𝑖 be a simplex of dimension 𝑖. A face of 𝜎 with codimension
1 is by definition an (𝑖 − 1)-dimensional subset, that is, a subset of 𝜎 with
exactly 𝑖 elements. To express the idea more generally, let 𝑛 and 𝑘 be pos-
itive integers with 𝑛 ≥ 𝑘. Given 𝑛 objects, how many ways can I pick 𝑘
of them? You may have seen this question in a statistics and probability
or a discrete math course. This is a combination, sometimes denoted
by $_{n}C_{k}$. In this book, we use the notation $\binom{n}{k}$ to denote the number of
ways we can choose 𝑘 objects from a set of 𝑛 objects. Now $\binom{n}{k}$ is a nice
looking symbol, but we don’t actually know what this number is! Let us
use the following example to derive a formula for $\binom{n}{k}$.
Example 1.26. Suppose we have a class of 𝑛 = 17 students and we
need to pick 𝑘 = 5 of them to each present a homework problem to
the class. How many different combinations of students can we pick?
One way to do this is to count something else and then remove the extra
factors. So let’s first count the number of ways we could arrange all the
students in a line. For the first person in line, we have 17 options. For the
second person in line, we have 16 options. For the third person in line,
we have 15 options. Continuing until the very last person, we obtain
17 ⋅ 16 ⋅ 15 ⋅ ⋯ ⋅ 2 ⋅ 1 = 17! possible lines. But we’re not interested in a line
of 17. We’re interested in only the first 5 so we need to get rid of the last
17 − 5 = 12 people. This yields $\frac{17!}{(17-5)!}$. This is the number of ways we
can arrange 5 of the students, but it is counting all the different ways we
can order those 5 students. Because order does not matter, we need to
divide by the total number of ways to order 5 students, or 5!. This yields
$$\frac{17!}{(17-5)!\,5!} = \binom{17}{5}.$$
The same argument as above shows that in general $\binom{n}{k} = \frac{n!}{(n-k)!\,k!}$.
Definition 1.27. Let 0 ≤ 𝑘 ≤ 𝑛 be non-negative integers. The number
of ways of choosing 𝑘 objects from 𝑛 objects, read 𝑛 choose 𝑘, is given by
$\binom{n}{k} = \frac{n!}{(n-k)!\,k!}$. The value $\binom{n}{k}$ is also known as a binomial coefficient.
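As a quick sanity check, the factorial formula can be compared against Python's built-in `math.comb`; the integer division below is exact because the denominator always divides the numerator.

```python
from math import comb, factorial

def binom(n, k):
    """n choose k via the factorial formula of Definition 1.27."""
    return factorial(n) // (factorial(n - k) * factorial(k))
```

For the classroom example, `binom(17, 5)` gives 6188 possible groups of presenters.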
There are many nice combinatorial identities that the binomial co-
efficients satisfy. These will be helpful throughout the text.
Proposition 1.28 (Pascal’s rule). Let 1 ≤ 𝑘 ≤ 𝑛 be positive integers.
Then $\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$.
Proof. We compute
$$\begin{aligned}
\binom{n-1}{k-1} + \binom{n-1}{k}
&= \frac{(n-1)!}{(k-1)!\,(n-1-(k-1))!} + \frac{(n-1)!}{k!\,(n-1-k)!} \\
&= \frac{(n-1)!\,k}{k!\,(n-k)!} + \frac{(n-1)!\,(n-k)}{k!\,(n-k)!} \\
&= \frac{(n-1)!\,(k+n-k)}{k!\,(n-k)!} \\
&= \frac{(n-1)!\,n}{k!\,(n-k)!} \\
&= \binom{n}{k}. \qquad\square
\end{aligned}$$
This fact can also be proved combinatorially. Suppose we are given
𝑛 objects. Call one of them 𝑎. Choosing 𝑘 of the 𝑛 objects may be broken
up into choosing 𝑘 objects that include the element 𝑎 and choosing 𝑘
objects that do not include the element 𝑎. How many ways are there to
do the former? Since we must include 𝑎, there are now 𝑛 − 1 objects to
choose from, and since we have already included 𝑎 in our list of chosen
objects, there are 𝑘 − 1 objects left to choose; i.e., there are $\binom{n-1}{k-1}$ choices.
To count the latter, we have 𝑛 − 1 total objects to choose from since we
are excluding 𝑎. But this time we may choose any 𝑘 of them; i.e., there
are $\binom{n-1}{k}$ choices. Thus $\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$.
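Neither argument needs a computer, but Pascal's rule is easy to spot-check numerically; a small sketch:

```python
from math import comb

def pascal_holds(max_n):
    """Verify comb(n, k) == comb(n-1, k-1) + comb(n-1, k) for all
    1 <= k <= n < max_n.  (comb returns 0 when k exceeds n, which
    handles the k == n boundary case.)"""
    return all(comb(n, k) == comb(n - 1, k - 1) + comb(n - 1, k)
               for n in range(1, max_n) for k in range(1, n + 1))
```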
Problem 1.29. Prove that $\binom{n}{k} = \binom{n}{n-k}$.
1.1.3. Euler characteristic. In this section we associate a number called
the Euler characteristic to a simplicial complex. The Euler characteristic
generalizes the way that we count. The following motivation is
taken from Rob Ghrist’s excellent book on applied topology [76].
Consider a collection of points:
If we wish to count the number of points, we associate a weight of
+1 to each point for a total of 6. Now if we begin to add edges between
the points, we must subtract 1 for each line we add; that is, each line
contributes a weight of −1.
Now there are 6 − 3 = 3 objects. If we add an edge like
we don’t lose an object. In fact, we create a hole. Nevertheless, if we
continue with the philosophy that an edge has weight −1, we will still
count this as 6 − 4 = 2 for the 6 points minus the 4 edges. Now filling in
this hole should “undo” the extra edge. In other words, a 2-simplex has
weight +1.
We now have a count of 6 − 4 + 1 = 3. In general, each even-dimensional
simplex has a weight of +1 while each odd-dimensional simplex has
a weight of −1. The Euler characteristic is this alternating sum of the
number of simplices of a complex in each dimension.
Definition 1.30. Let 𝐾 be an 𝑛-dimensional simplicial complex, and let
𝑐𝑖 (𝐾) denote the number of 𝑖-simplices of 𝐾. The Euler characteristic
of 𝐾, 𝜒(𝐾), is defined by $\chi(K) := \sum_{i=0}^{n} (-1)^i c_i(K)$.
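Since an 𝑖-simplex is a set of 𝑖 + 1 vertices, 𝜒 can be computed either from the 𝑐-vector or simplex by simplex. A sketch (complexes as sets of `frozenset`s of vertex labels, an assumption of this illustration):

```python
def chi_from_c(c):
    # chi = c_0 - c_1 + c_2 - ...
    return sum((-1) ** i * ci for i, ci in enumerate(c))

def chi(K):
    # Each i-simplex, i.e. each set of i + 1 vertices, weighs (-1)^i.
    return sum((-1) ** (len(sigma) - 1) for sigma in K)

# The motivating count above: 6 points, 4 edges, 1 filled triangle.
K = {frozenset(s) for s in
     ["1", "2", "3", "4", "5", "6", "12", "23", "13", "45", "123"]}
```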
Notice that this idea of “filling in” or “detecting” holes seems to be
taken into account by this Euler characteristic. It is unclear exactly what
a hole is at this point, but we should start to develop a sense of this idea.
Example 1.31. Given the 𝑐-vector of a complex, it is easy to compute
the Euler characteristic. Several examples are listed in Section 1.1.1. For
instance, the torus 𝑇² has 𝑐-vector (9, 27, 18) so that 𝜒(𝑇²) = 9 − 27 + 18 = 0.
Example 1.32. If 𝐾 is the simplicial complex defined in Problem 1.8,
then 𝑐0 = 6, 𝑐1 = 7, and 𝑐2 = 2. Hence 𝜒(𝐾) = 𝑐0 − 𝑐1 + 𝑐2 = 6 − 7 + 2 = 1.
Exercise 1.33. Compute the Euler characteristic for the simplicial com-
plex defined in Example 1.5.
Problem 1.34. Let 𝑛 ≥ 3 be an integer, and let 𝐶𝑛 be the simplicial
complex defined on [𝑣𝑛−1 ] with facets given by {𝑣0 , 𝑣1 }, {𝑣1 , 𝑣2 }, {𝑣2 , 𝑣3 },
… , {𝑣𝑛−2 , 𝑣𝑛−1 }, {𝑣𝑛−1 , 𝑣0 }.
(i) Draw a picture of 𝐶3 , 𝐶4 , and 𝐶5 .
(ii) What is the dimension of 𝐶𝑛 ?
(iii) What is 𝜒(𝐶𝑛 )? Prove it by induction on 𝑛.
Problem 1.35. Recall that Δ𝑛 ∶= 𝒫([𝑣𝑛 ]) − {∅} is the 𝑛-simplex from
Example 1.13. Show that Δ𝑛 is a simplicial complex and compute 𝜒(Δ𝑛 ).
Since 𝑆 𝑛 is simply Δ𝑛+1 with the (𝑛 + 1)-simplex removed, it follows
that for all integers 𝑘 ≥ 0,
$$\chi(S^n) = \begin{cases} \chi(\Delta^{n+1}) + 1 & \text{if } n = 2k, \\ \chi(\Delta^{n+1}) - 1 & \text{if } n = 2k+1. \end{cases}$$
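Both facts can be replayed directly from the definitions Δ𝑛 ∶= 𝒫([𝑣𝑛 ]) − {∅} and 𝑆 𝑛 ∶= Δ𝑛+1 − {[𝑣𝑛+1 ]}. An illustrative sketch (not from the text) confirming 𝜒(Δ𝑛) = 1 and the even/odd pattern for spheres:

```python
from itertools import combinations

def delta(n):
    """Delta^n: all nonempty subsets of an (n+1)-element vertex set."""
    V = range(n + 1)
    return {frozenset(s) for r in range(1, n + 2)
            for s in combinations(V, r)}

def sphere(n):
    """S^n: Delta^{n+1} with its top-dimensional simplex removed."""
    return delta(n + 1) - {frozenset(range(n + 2))}

def chi(K):
    # An i-simplex (a set of i + 1 vertices) contributes (-1)^i.
    return sum((-1) ** (len(s) - 1) for s in K)
```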
1.2. Simple homotopy
All branches of math have a notion of “sameness.” For example, in group
theory, two groups 𝐴 and 𝐵 are the “same” if there is a group isomor-
phism between them. In linear algebra, two vector spaces are the same
if there is a vector space isomorphism between them. What would it
mean for two simplicial complexes to be “the same”? In general, there
is more than one answer to this question, depending on the interests of
the mathematician. For our purposes, we are interested in simple ho-
motopy type.
Definition 1.36. Let 𝐾 be a simplicial complex and suppose that there
is a pair of simplices {𝜎(𝑝−1) , 𝜏(𝑝) } in 𝐾 such that 𝜎 is a face of 𝜏 and
𝜎 has no other cofaces. Then 𝐾 − {𝜎, 𝜏} is a simplicial complex called
an elementary collapse of 𝐾. The action of collapsing is denoted by
𝐾 ↘ 𝐾 − {𝜎, 𝜏}. On the other hand, suppose {𝜎(𝑝−1) , 𝜏(𝑝) } is a pair of
simplices not in 𝐾 where 𝜎 is a face of 𝜏 and all other faces of 𝜏 are in 𝐾
(and, consequently, all faces of 𝜎 are also in 𝐾). Then 𝐾 ∪ {𝜎(𝑝−1) , 𝜏(𝑝) } is
a simplicial complex called an elementary expansion of 𝐾, denoted by
𝐾 ↗ 𝐾 ∪ {𝜎(𝑝−1) , 𝜏(𝑝) }. For either an elementary collapse or an elemen-
tary expansion, such a pair {𝜎, 𝜏} is called a free pair. We say that 𝐾 and
𝐿 are of the same simple homotopy type, denoted by 𝐾 ∼ 𝐿, if there
is a series of elementary collapses and expansions from 𝐾 to 𝐿. In the
case where 𝐿 = {𝑣} = ∗ is a single point, we say that 𝐾 has the simple
homotopy type of a point.
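Free pairs are easy to find mechanically: 𝜎 is free exactly when it has a unique coface 𝜏 (closure under faces then forces 𝜏 to have dimension one higher). The greedy loop below is only a heuristic, since the order in which collapses are performed can matter, as Example 1.67 later shows. (Python sketch; complexes as sets of `frozenset`s, a convention of this illustration.)

```python
def free_pairs(K):
    """All pairs (sigma, tau) where tau is the unique coface of sigma
    in K; such tau is automatically a codimension-1 coface."""
    out = []
    for sigma in K:
        cofaces = [t for t in K if sigma < t]
        if len(cofaces) == 1:
            out.append((sigma, cofaces[0]))
    return out

def greedy_collapse(K):
    """Repeatedly remove some free pair until none remains.  Greedy,
    so it may stop early even when a cleverer order would continue."""
    K = set(K)
    while (fp := free_pairs(K)):
        sigma, tau = fp[0]
        K -= {sigma, tau}
    return K
```

A full triangle collapses all the way to a single vertex, while a simplicial circle has no free pairs at all.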
Exercise 1.37. Let 𝐾 be a simplicial complex and {𝜎, 𝜏} a free pair of 𝐾.
Show that 𝐾 − {𝜎, 𝜏} is a simplicial complex.
Simple homotopy has its origins in the work of J. H. C. Whitehead
[152]. It has interesting connections [21, 22] with Stong’s theory of finite
spaces [145], and there is even a book-length treatment of the subject
[45].
Problem 1.38. Show that simple homotopy is an equivalence relation
on the set of all simplicial complexes.
Example 1.39. We will consider a series of expansions and collapses of
the simplicial complex below to find another simplicial complex which
has the same simple homotopy type.
[Figure: a sequence of complexes joined by eight elementary collapses (↘) and one elementary expansion (↗), ending at a graph.]
Thus the original simplicial complex has the same simple homotopy
type as the above graph.
Once we have a notion of equivalence, an immediate question is
“What stays the same under this notion of equivalence?”
Definition 1.40. Let 𝐾 be a simplicial complex. A function 𝛼 which
associates a real number 𝛼(𝐾) to every simplicial complex 𝐾 is called a
simple homotopy invariant or invariant if 𝛼(𝐾) = 𝛼(𝐿) whenever
𝐾 ∼ 𝐿.
For example, suppose 𝐾 ∼ 𝐿. Do 𝐾 and 𝐿 have the same number of
vertices? Do 𝐾 and 𝐿 have the same number of holes? It depends what
we mean by a hole. In the above example, both complexes seem to have
two holes. We will make the notion of holes precise in Chapter 3.
Problem 1.41.
(i) Is dim an invariant? That is, if 𝐾 ∼ 𝐿, is dim(𝐾) = dim(𝐿)?
Prove it or give a counterexample.
(ii) Let 𝐾 ∼ 𝐿. Does it follow that |𝑉(𝐾)| = |𝑉(𝐿)|? Prove it or give
a counterexample.
Another number we have associated to a simplicial complex is the
Euler characteristic. The following proposition claims that the Euler
characteristic is a simple homotopy invariant.
Proposition 1.42. Suppose 𝐾 ∼ 𝐿. Then 𝜒(𝐾) = 𝜒(𝐿).
You will be asked to prove this in Problem 1.44. The following ex-
ercise may be useful in understanding how the proof of Proposition 1.42
works.
Exercise 1.43. Compute the Euler characteristics of the very first and
very last simplicial complexes in Example 1.39. Compute the Euler char-
acteristic of all the simplicial complexes in between.
Problem 1.44. Prove Proposition 1.42.
Example 1.45. A nice use of Proposition 1.42 is that it gives us a way
to show that two simplicial complexes do not have the same simple ho-
motopy type. Consider again the simplicial complex in Example 1.39.
Through elementary collapses and expansions, we saw that it has the
same simple homotopy type as the following 𝐾:
How do we know we can’t perform more elementary expansions and
collapses to simplify this to 𝐿 given below?
Although we can’t collapse any further, perhaps we could expand
up to, say, a 20-dimensional simplex, and then collapse down to remove
a hole.² We can lay all such speculations aside by simply computing the
Euler characteristics. Observe that 𝜒(𝐾) = 5 − 6 = −1 while 𝜒(𝐿) =
3 − 3 = 0. Since the Euler characteristics are different, 𝐾 ≁ 𝐿.
We may now distinguish some of the simplicial complexes we de-
fined in Section 1.1.1.
Problem 1.46. Determine through proof which of the following sim-
plicial complexes have the same simple homotopy type. Are there any
complexes which you can neither tell apart nor prove have the same sim-
ple homotopy type?³
• 𝑆1
• 𝑆3
• 𝑆4
• Δ2
• Δ3
Example 1.47. You showed in Problem 1.34 that a collection of simpli-
cial complexes 𝐶𝑛 all have the same Euler characteristic. This actually
follows from Proposition 1.42 and the fact that 𝐶𝑚 ∼ 𝐶𝑛 for any 𝑚, 𝑛 ≥ 3.
²In fact, there are examples of simplicial complexes with no free pairs that can be expanded up
and then collapsed back down to a point. Example 1.67 discusses one such example.
³Distinguishing them will have to wait until Chapters 4 and 8.
We prove this latter fact now by showing that 𝐶3 ∼ 𝐶4 ∼ ⋯ ∼ 𝐶𝑛 ∼
𝐶𝑛+1 ∼ ⋯. We proceed by induction on 𝑛. We will show that 𝐶3 ∼ 𝐶4
pictorially through elementary expansions and collapses:
[Figure: 𝐶3 is turned into 𝐶4 by two elementary expansions followed by one elementary collapse (↗ ↗ ↘).]
Hence 𝐶3 ∼ 𝐶4 . Now assume that there is an 𝑛 ≥ 4 such that 𝐶𝑖 ∼ 𝐶𝑖−1
for all 3 ≤ 𝑖 ≤ 𝑛. We will show that 𝐶𝑛 ∼ 𝐶𝑛+1 using the same set of
expansions and collapses going from 𝐶3 to 𝐶4 . Formally, let 𝑎𝑏 ∈ 𝐶𝑛
be a 1-simplex (so that 𝑎 and 𝑏 are vertices) and expand 𝐶𝑛 ↗ 𝐶𝑛 ∪
{𝑏′ , 𝑏𝑏′ } =∶ 𝐶𝑛′ where 𝑏′ is a new vertex. Furthermore, expand 𝐶𝑛′ ↗
𝐶𝑛′ ∪ {𝑎𝑏′ , 𝑎𝑏𝑏′ } =∶ 𝐶𝑛″ . Since 𝑎𝑏 is not the face of any simplex in 𝐶𝑛 , the
pair {𝑎𝑏, 𝑎𝑏𝑏′ } is a free pair in 𝐶𝑛″ and thus 𝐶𝑛″ ↘ 𝐶𝑛″ − {𝑎𝑏, 𝑎𝑏𝑏′ } = 𝐶𝑛+1 .
Hence 𝐶𝑛 ∼ 𝐶𝑛+1 and the result follows.
A special case of the Euler characteristic computation occurs when
𝐾 has the simple homotopy type of a point.
Proposition 1.48. Suppose that 𝐾 ∼ ∗. Then 𝜒(𝐾) = 1.
Problem 1.49. Prove Proposition 1.48.
Problem 1.50. Prove that 𝑆 1 ∼ ℳ where ℳ is the Möbius band of Ex-
ample 1.24.
1.2.1. Collapsibility. A special kind of simple homotopy type is ob-
tained when one exclusively uses collapses to simplify 𝐾 to a point.
Definition 1.51. A simplicial complex 𝐾 is collapsible if there is a se-
quence of elementary collapses
𝐾 = 𝐾0 ↘ 𝐾1 ↘ … ↘ 𝐾𝑛−1 ↘ 𝐾𝑛 = {𝑣}.
Notice the difference between a simplicial complex being collapsible
and its having the simple homotopy type of a point. With the latter we
allow expansions and collapses, while with the former we only allow
collapses.
Definition 1.52. Let 𝐾 and 𝐿 be two simplicial complexes with no ver-
tices in common. Define the join of 𝐾 and 𝐿, written 𝐾 ∗ 𝐿, by
𝐾 ∗ 𝐿 ∶= {𝜎, 𝜏, 𝜎 ∪ 𝜏 ∶ 𝜎 ∈ 𝐾, 𝜏 ∈ 𝐿}.
Example 1.53. Let 𝐾 ∶= {𝑎, 𝑏, 𝑐, 𝑎𝑏, 𝑏𝑐} and 𝐿 ∶= {𝑢, 𝑣, 𝑢𝑣}. Then the
join 𝐾 ∗ 𝐿 is given by
[Figure: the join 𝐾 ∗ 𝐿, drawn with 𝐾 (vertices 𝑎, 𝑏, 𝑐) along the bottom and 𝐿 (vertices 𝑢, 𝑣) along the top.]
Notice that there is a copy of 𝐾 at the bottom of the picture and a
copy of 𝐿 at the top. As a set, 𝐾 ∗ 𝐿 = ⟨𝑎𝑏𝑢𝑣, 𝑏𝑐𝑢𝑣⟩.
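The join itself is a one-line set computation. The sketch below (sets of `frozenset`s with disjoint vertex labels, an assumption of this illustration) rebuilds Example 1.53 and recovers its facet description:

```python
def join(K, L):
    """K * L := {sigma, tau, sigma ∪ tau : sigma in K, tau in L}
    (Definition 1.52).  K and L must have disjoint vertex sets."""
    return set(K) | set(L) | {s | t for s in K for t in L}

def facets(K):
    # Simplices not properly contained in any other simplex.
    return {s for s in K if not any(s < t for t in K)}

K = {frozenset(x) for x in ["a", "b", "c", "ab", "bc"]}
L = {frozenset(x) for x in ["u", "v", "uv"]}
```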
Exercise 1.54. (i) Show that 𝐾 ∗ 𝐿 is a simplicial complex.
(ii) Prove that 𝐾 ∗ {∅} = 𝐾.
Problem 1.55. Show that if 𝐾 ↘ 𝐾 ′ , then 𝐾 ∗ 𝐿 ↘ 𝐾 ′ ∗ 𝐿.
We have already seen one example of a join. The cone on 𝐾, which
we defined in Example 1.17, is just a special case of a join with a single
vertex; that is, the cone over 𝐾 can be defined by 𝐶𝐾 ∶= 𝐾 ∗ {𝑣}. There
is another special case of the join, called the suspension.
Definition 1.56. Let 𝐾 be a simplicial complex with 𝑣, 𝑤 ∉ 𝐾 and let
𝑤 ≠ 𝑣. Define the suspension of 𝐾 by Σ𝐾 ∶= 𝐾 ∗ {𝑣, 𝑤}.
Exercise 1.57. Let 𝐾 be the simplicial complex from Exercise 1.18. Draw
Σ𝐾.
Exercise 1.57 helps one to see that the suspension of 𝐾 is two cones
on 𝐾. This, however, is not to be confused with the cone of the cone, or
double cone, as the following exercise illustrates.
Exercise 1.58. Let 𝐾 = {𝑢, 𝑤, 𝑢𝑤}. Draw Σ𝐾 and 𝐶(𝐶𝐾), and conclude
that in general Σ𝐾 ≠ 𝐶(𝐶𝐾).
Proposition 1.59. The cone 𝐶𝐾 over any simplicial complex is collapsi-
ble.
Proof. We prove the result by induction on 𝑛, the number of simplices
in the complex 𝐾. For 𝑛 = 1, there is only one simplicial complex with
a single simplex, namely, a 0-simplex. The cone on this is clearly col-
lapsible. Now suppose by the inductive hypothesis that 𝑛 ≥ 1 is given
and that the cone on any simplicial complex with 𝑛 simplices is collapsi-
ble. Let 𝐾 be a simplicial complex on 𝑛 + 1 simplices, and consider
𝐶𝐾 = 𝐾 ∗ {𝑣}. For any facet 𝜎 of 𝐾, observe that {𝜎 ∪ {𝑣}, 𝜎} is a free
pair in 𝐶𝐾 since 𝜎 ∪ {𝑣} is a facet of 𝐶𝐾 and 𝜎 is not a face of any other
simplex. Hence 𝐶𝐾 ↘ 𝐶𝐾 −{𝜎∪{𝑣}, 𝜎}. But 𝐶𝐾 −{𝜎∪{𝑣}, 𝜎} = 𝐶(𝐾 −{𝜎})
(Problem 1.60) is a cone on a complex with 𝑛 simplices and, by the inductive hypoth-
esis, collapsible. Thus 𝐶𝐾 ↘ 𝐶𝐾 − {𝜎 ∪ {𝑣}, 𝜎} ↘ ∗, and all cones are
collapsible. □
Problem 1.60. Let 𝐾 be a simplicial complex and 𝜎 a facet of 𝐾. If 𝐶𝐾 =
𝐾 ∗ {𝑣}, prove that 𝐶𝐾 − {𝜎 ∪ {𝑣}, 𝜎} = 𝐶(𝐾 − {𝜎}).
Problem 1.61. Show that 𝐾 is collapsible if and only if 𝐶𝐾 ↘ 𝐾.
Exercise 1.62. Show that the suspension is not in general collapsible
but that there exist collapsible subcomplexes 𝐾1 and 𝐾2 (not necessarily
disjoint) of Σ𝐾 such that 𝐾1 ∪ 𝐾2 = Σ𝐾.
Problem 1.63. Prove that 𝜒(Σ𝐾) = 2 − 𝜒(𝐾).
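Problem 1.63 is meant to be proved from the definitions, but it can also be spot-checked numerically. A sketch, with fresh suspension vertices "+" and "-" assumed not to occur in 𝐾 (again an illustrative set-of-`frozenset`s encoding, not the book's):

```python
def chi(K):
    return sum((-1) ** (len(s) - 1) for s in K)

def suspension(K, v="+", w="-"):
    """Sigma K := K * {v, w}: the simplices of K together with {v},
    {w}, and sigma ∪ {v}, sigma ∪ {w} for each sigma in K."""
    S0 = {frozenset([v]), frozenset([w])}
    return set(K) | S0 | {s | p for s in K for p in S0}

# A path (chi = 1) and a simplicial circle (chi = 0).
path = {frozenset(x) for x in ["a", "b", "c", "ab", "bc"]}
circle = {frozenset(x) for x in ["a", "b", "c", "ab", "bc", "ca"]}
```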
Problem 1.64. Prove that Δ𝑛 is collapsible for every 𝑛 ≥ 1.
Because simple homotopy involves both elementary collapses and
expansions, we have the following.
Proposition 1.65. If 𝐾 ↘ 𝐻 and 𝐻 ↘ 𝐿, then 𝐾 ∼ 𝐻 ∼ 𝐿.
Remark 1.66. The above proposition along with Proposition 1.59 and
Problem 1.64 combine to show that Δ𝑛 ∼ 𝐶𝐾 ∼ ∗ so that all of these
have the same simple homotopy type.
What about the converse of Proposition 1.65 in the special case
where 𝐻 ∼ ∗? That is, if 𝐾 ∼ ∗, is 𝐾 collapsible? If not, we would need to
find a simplicial complex which can’t be collapsed down to a point right
away—it would need to be expanded up and then collapsed down (possi-
bly multiple times) before it could be collapsed to a point. Although the
proof of the existence of such a simplicial complex is beyond the scope
of this book, the dunce cap is one example of a simplicial complex that
has the simple homotopy type of a point but is not collapsible.
Example 1.67. Refer back to Example 1.22 for the definition of the
dunce cap 𝐷. It is easy to see that 𝜒(𝐷) = 1 but 𝐷 is not collapsible.
Bruno Benedetti and Frank Lutz showed that the order in which col-
lapses are performed makes a difference. They constructed a simplicial
complex with only eight vertices that can be collapsed to a point. Nev-
ertheless, they also proved that there is a way to collapse the same com-
plex to a dunce cap. The proof of this result is a bit technical, as great
care must be taken in removing the proper sequence of free pairs in the
right order. See [31, Theorem 1]. Hence, one can find a sequence of
collapses that results in a point and another sequence of collapses that
results in a dunce cap. Benedetti and Lutz have also constructed exam-
ples of complexes that “look like” a ball (simplicial 3-balls) but are not
collapsible [32].
Exercise 1.68. Show that the dunce cap is not collapsible.
Even though the dunce cap is not collapsible, the result of Benedetti
and Lutz implies that 𝐷 ∼ ∗. This, combined with some of our work
above (including the study of the Euler characteristic), has allowed us to
distinguish several of the complexes proposed in Section 1.1.1 as well as
to determine which are the same.
Problem 1.69. Use the 𝑐-vectors in Section 1.1.1 as well as your work
in the Exercises and Problems in this chapter to fill in the table below.
       Δ𝑛    𝑆𝑛    ∗    𝐶𝐾    𝑇2    𝑃2    𝐷    𝒦    𝑀    𝐵
𝜒 =
Which complexes have the same simple homotopy type? Which
complexes can you still not tell apart?
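Since 𝜒 is just the alternating sum of the entries of the 𝑐-vector, the entries of the table in Problem 1.69 are easy to sanity-check mechanically. A minimal sketch in Python (the function name and the encoding of a 𝑐-vector as a list are our conventions, not the book's):

```python
def euler_characteristic(c_vector):
    # chi = c_0 - c_1 + c_2 - ... , where c_i counts the i-simplices
    return sum((-1) ** i * c for i, c in enumerate(c_vector))

# The boundary of the tetrahedron (a 2-sphere): 4 vertices, 6 edges, 4 triangles.
print(euler_characteristic([4, 6, 4]))    # 2
# The 7-vertex triangulation of the torus: 7 vertices, 21 edges, 14 triangles.
print(euler_characteristic([7, 21, 14]))  # 0
```

The same one-liner checks Problem 1.63: doubling a complex's χ-defect under suspension is a quick numerical experiment once a triangulation's 𝑐-vector is known.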
Further references on simplicial complexes and their generaliza-
tions are plentiful. The books [64, 85, 134] are standard references for
simplicial complexes. Another excellent reference is Jakob Jonsson’s
unpublished notes on simplicial complexes [89], which include an intro-
duction to discrete Morse theory. The material in this section is closely
related to combinatorial topology [77,78] as well as piecewise-linear (PL)
topology [138]. Other interesting uses of simplicial complexes arise in
combinatorial algebraic topology [103] and topological combinatorics
[50].
Chapter 2
Discrete Morse theory
In this chapter, we introduce discrete Morse theory in three sections.
These sections are concerned with different ways of communicating a
discrete Morse function. As we introduce these definitions, we will also
build the theory by relating the definitions to one another. We will see
that there are varied and diverse ways of thinking about a discrete Morse
function, giving us many options in attacking a problem. In this sense,
discrete Morse theory is like a Picasso painting; “looking at it from differ-
ent perspectives produces completely different impressions of its beauty
and applicability”[47].
But first, we revisit the question of what discrete Morse theory can
do. In this book, discrete Morse theory is a tool used to help us study
simplicial complexes. It will help us
(a) detect and bound the number of holes in a simplicial complex;
(b) replace a simplicial complex with an equivalent one that is
smaller.
Now it is still not clear what we mean by a hole. But at this point,
we should have some intuition and ideas about what a hole should be.
For the second point, we do know what it means to replace a simplicial
complex by an equivalent one—namely, a simplicial complex with the
same simple homotopy type. Actually, we will be able to say something
a bit stronger. We will be able to use only collapses (no expansions) to re-
place a complex with a different complex that has fewer simplices.¹ This
is especially helpful when one is interested in only topological informa-
tion of the complex. Think, for example, of the reduction mentioned
in Section 0.3. If you revisit this example, you can see that the original
complex we built collapses to a much smaller complex. Such a sequence
of collapses can be found using discrete Morse theory.
Consider the sequence of collapses in Example 1.39. While this ex-
ample was a thorough demonstration of our collapsing strategy, it was
quite cumbersome to write. Indeed, it took up half a page! Is there a
more succinct way to communicate this sequence of collapses? Suppose
that instead of drawing each simplicial complex at each stage, we draw a
single simplicial complex with some decorations on it. The decorations
will indicate which simplices to collapse. These decorations will take the
form of an arrow indicating the direction of the collapse. So the picture
would be shorthand for
If we want to communicate more collapses, we add more arrows.
In general, an arrow has its tail (or “start”) in a simplex and its head
in a codimension-1 face. This should be thought of as a free pair, or at
least a pair that will eventually be free after other pairs are removed.

¹ In addition, we will show that one can make further reductions using the methods of
Chapter 8. This is probably the most practical reduction.

The
entire sequence of collapses from Example 1.39 (excluding the expansion
followed by two collapses at the end) could be communicated by the
decorated simplicial complex
Note that no order to the collapses is specified. In Section 2.1, we’ll
see how to specify an exact order. But for now we’ll leave the order of
the collapses to the side. Another issue to consider is whether all as-
signments of arrows make sense or lead to a well-defined rule. You will
investigate this question in the following exercises.
Exercise 2.1. Consider the simplicial complex
Does this determine a sequence of collapses? Why or why not?
Exercise 2.2. Also consider the following variation:
Does this determine a sequence of collapses? Why or why not?
Furthermore, recall that in Example 1.39 we eventually had to stop.
We had something “blocking” us. What if we just rip out whatever is
blocking us?
From there, we may continue collapsing via arrows. If we get stuck
again, we rip out the obstruction. The resulting picture would still have
all the same arrows on it, but the pieces that were ripped out would be
unlabeled.
These are the basic ideas of discrete Morse theory. With all of these ideas
in mind, we now investigate the formal definition of a discrete Morse
function.
2.1. Discrete Morse functions
2.1.1. Basic discrete Morse functions. We begin by defining a sim-
pler kind of discrete Morse function which we call a basic discrete Morse
function. It is due to Bruno Benedetti.
Definition 2.3. Let 𝐾 be a simplicial complex. A function 𝑓 ∶ 𝐾 → ℝ
is called weakly increasing if 𝑓(𝜎) ≤ 𝑓(𝜏) whenever 𝜎 ⊆ 𝜏. A basic
discrete Morse function 𝑓 ∶ 𝐾 → ℝ is a weakly increasing function
which is at most 2–1 and satisfies the property that if 𝑓(𝜎) = 𝑓(𝜏), then
either 𝜎 ⊆ 𝜏 or 𝜏 ⊆ 𝜎.
A function 𝑓 ∶ 𝐴 → 𝐵 is 2–1 if for every 𝑏 ∈ 𝐵, there are at most two
values 𝑎1 , 𝑎2 ∈ 𝐴 such that 𝑓(𝑎1 ) = 𝑓(𝑎2 ) = 𝑏. In other words, at most
two elements in 𝐴 are sent to any single element in 𝐵.
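The three conditions of Definition 2.3 can be checked mechanically on a finite complex. A sketch in Python, where a complex is encoded (our convention, not the book's) as a dictionary sending each simplex, stored as a frozenset of its vertices, to its value:

```python
from collections import defaultdict

def is_basic_dmf(f):
    """Check Definition 2.3 for f, a dict from simplices (frozensets) to values."""
    simplices = list(f)
    # (1) weakly increasing: f(sigma) <= f(tau) whenever sigma is a face of tau
    for s in simplices:
        for t in simplices:
            if s < t and f[s] > f[t]:
                return False
    # group the simplices by value
    fibers = defaultdict(list)
    for s in simplices:
        fibers[f[s]].append(s)
    for fiber in fibers.values():
        if len(fiber) > 2:          # (2) f must be at most 2-1
            return False
        if len(fiber) == 2:         # (3) equal values only on a face/coface pair
            a, b = fiber
            if not (a < b or b < a):
                return False
    return True

# A single edge ab: the free pair (b, ab) shares the value 1.
K = {frozenset('a'): 0, frozenset('b'): 1, frozenset('ab'): 1}
print(is_basic_dmf(K))   # True
```

Note that Python's `<` on frozensets is exactly the proper-face relation, which keeps the weakly-increasing test a one-line comparison.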
Example 2.4. The following is an example of a basic discrete Morse
function.
[Figure: a simplicial complex with every simplex labeled by a value between 0 and 15.]
In order to verify that this is indeed a basic discrete Morse function,
three things need to be checked. First, we ask whether 𝑓 is weakly in-
creasing. Second, we check if 𝑓 is 2–1. Finally, we need to check that if
𝑓(𝜎) = 𝑓(𝜏), then either 𝜎 ⊆ 𝜏 or 𝜏 ⊆ 𝜎. If the answer to each of these
questions is “yes,” then we have a basic discrete Morse function.
Exercise 2.5. Verify that the function in Example 2.4 is a basic discrete
Morse function.
Example 2.6. Another example of a basic discrete Morse function, this
time on a graph, is given by
[Figure: a graph with every vertex and edge labeled by a value between 0 and 8.]
Definition 2.7. Let 𝑓 ∶ 𝐾 → ℝ be a basic discrete Morse function. A
simplex 𝜎 of 𝐾 is critical if 𝑓 is injective on 𝜎. Otherwise, 𝜎 is called reg-
ular. If 𝜎 is a critical simplex, the value 𝑓(𝜎) is called a critical value.
If 𝜎 is a regular simplex, the value 𝑓(𝜎) is called a regular value.
Remark 2.8. Note the distinction between a critical simplex and a criti-
cal value. A critical simplex is a simplex, such as 𝜎 or 𝑣, which is critical.
A critical value is a real number given by the labeling or output of the
discrete Morse function on a simplex.
Example 2.9. Consider another example:
[Figure: a simplicial complex with every simplex labeled by a value between 0 and 13.]
Again, one can check that this satisfies the definition of a basic dis-
crete Morse function. The critical simplices are the ones labeled 0, 6, and
7. All other simplices are regular.
Example 2.10. The critical values in Example 2.6 are 0, 3, 4, 6, and 7.
The regular values are 1, 2, 5, and 8.
Exercise 2.11. Identify the critical and regular simplices in Example 2.4.
Example 2.12. For a more involved example, let us put a basic dis-
crete Morse function on the torus from Example 1.20. Recall that the
top and bottom are “glued” together, and so are the sides. In particular,
the “four” corner vertices are really the same vertex 𝑣0 .
[Figure: the triangulation of the torus, drawn as a square with identified sides and with every simplex labeled by a value between 0 and 30.]
The reader should verify that this is a basic discrete Morse function
and find all the critical and regular simplices.
Exercise 2.13.
Consider the following simplicial complex 𝐾:
(i) Find a basic discrete Morse function on 𝐾 with one critical sim-
plex.
(ii) Find a basic discrete Morse function on 𝐾 with three critical
simplices.
(iii) Find a basic discrete Morse function on 𝐾 where every simplex
is critical.
(iv) Can you find a basic discrete Morse function on 𝐾 with two
critical simplices? Zero critical simplices?
Exercise 2.14. Find a basic discrete Morse function with six critical sim-
plices on the Möbius band (Example 1.24).
2.1.2. Forman definition. One advantage of working with a basic dis-
crete Morse function is that the critical simplices are easy to identify.
They are the simplices with a unique value. At the same time, this can
be a drawback since we may want multiple critical simplices to have the
same value. The original definition of a discrete Morse function, articu-
lated by the founder of discrete Morse theory, Robin Forman [65], allows
for this. It is the definition that tends to be used in most of the literature.
Definition 2.15. A discrete Morse function 𝑓 on 𝐾 is a function
𝑓 ∶ 𝐾 → ℝ such that for every 𝑝-simplex 𝜎 ∈ 𝐾, we have
|{𝜏(𝑝−1) < 𝜎 ∶ 𝑓(𝜏) ≥ 𝑓(𝜎)}| ≤ 1
and
|{𝜏(𝑝+1) > 𝜎 ∶ 𝑓(𝜏) ≤ 𝑓(𝜎)}| ≤ 1.
The above definition can be a bit difficult to parse. We’ll illustrate
with several examples, but the basic idea is that as a general rule, higher-
dimensional simplices have higher values and lower-dimensional sim-
plices have lower values; that is, the function generally increases as you
increase the dimension of the simplices. But we allow at most one ex-
ception per simplex. So, for example, if 𝜎 is a simplex of dimension 𝑝, all
of its (𝑝 − 1)-dimensional faces must be given a value strictly less than the value of 𝜎,
with at most one exception. Similarly, all of the (𝑝 + 1)-dimensional co-
faces of 𝜎 must be given a value strictly greater than 𝜎, with at most one
exception. We’ll see in Lemma 2.24, called the exclusion lemma, that
the exceptions cannot both occur for the same simplex. But for now, the
easiest way to understand the definition is to work through some exam-
ples.
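Before working through the examples, note that the two counting conditions of Definition 2.15 can also be verified by brute force on a finite complex. A sketch (the dictionary encoding, with each simplex a frozenset of its vertices, is our convention):

```python
def is_discrete_morse(f):
    """Check Definition 2.15: each p-simplex has at most one (p-1)-face with a
    >= value and at most one (p+1)-coface with a <= value."""
    K = list(f)
    for s in K:
        faces   = [t for t in K if t < s and len(t) == len(s) - 1]
        cofaces = [t for t in K if s < t and len(t) == len(s) + 1]
        if sum(f[t] >= f[s] for t in faces) > 1:
            return False
        if sum(f[t] <= f[s] for t in cofaces) > 1:
            return False
    return True

# The hollow triangle abc with one "exception": the edge ab takes the value 1,
# which is not strictly greater than the value of its face b.
tri = {frozenset('a'): 0, frozenset('b'): 1, frozenset('c'): 2,
       frozenset('ab'): 1, frozenset('bc'): 3, frozenset('ac'): 4}
print(is_discrete_morse(tri))   # True
```

Lowering the edge 𝑎𝑏 to 0 would give it two exceptional faces at once, and the check fails, which is exactly the kind of violation to look for in the examples below.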
Example 2.16. Consider the graph below with labeling
[Figure: a graph with every vertex and edge labeled by a value between 0 and 16; the bottom left vertex is labeled 0, both of its edges are labeled 1, and the top edge labeled 13 has faces labeled 12 and 14.]
Let’s investigate whether this is a discrete Morse function. We have
to check that each simplex satisfies the rules of Definition 2.15. Let’s
start with the vertex labeled 0 in the bottom left corner. The rule is that
every coface of that vertex must be labeled something greater than 0,
with possibly a single exception. Both cofaces in this case happen to be
labeled 1, which is greater than 0. The vertex is not the face of anything,
so there is nothing more to check for this simplex. What about the edge
labeled 13 at the top? It is not the coface of any simplex, but it does have
two faces. Both of these faces must be given a value less than 13, with
possibly one exception. One of the faces is labeled 12, which is less than
13, but the other is labeled 14, which is greater than 13. That’s okay—
that is our one allowed exception for that edge. Now do this for all the
other simplices to see that 𝑓 is indeed a discrete Morse function.
Problem 2.17. Show that every basic discrete Morse function is a dis-
crete Morse function, but not vice versa.
Problem 2.18. Determine if each of the following labelings defines a
discrete Morse function. If not, change some numbers to make it into
one.
(i) [Figure: a simplicial complex with a proposed labeling.]
(ii) [Figure: a simplicial complex with a proposed labeling.]
(iii) [Figure: a simplicial complex with a proposed labeling.]
(iv) [Figure: a simplicial complex with a proposed labeling.]
Definition 2.19. A 𝑝-simplex 𝜎 ∈ 𝐾 is said to be critical with respect
to a discrete Morse function 𝑓 if
|{𝜏(𝑝−1) < 𝜎 ∶ 𝑓(𝜏) ≥ 𝑓(𝜎)}| = 0
and
|{𝜏(𝑝+1) > 𝜎 ∶ 𝑓(𝜏) ≤ 𝑓(𝜎)}| = 0.
If 𝜎 is a critical simplex, the number 𝑓(𝜎) ∈ ℝ is called a critical
value. Any simplex that is not critical is called a regular simplex, while
any output value of the discrete Morse function which is not a critical
value is a regular value.
In other words, the critical simplices are those simplices which do
not admit any “exceptions.”
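Definition 2.19 likewise lends itself to direct computation: a simplex is critical exactly when both counts are zero. A sketch, again encoding each simplex as a frozenset of its vertices (our convention):

```python
def critical_simplices(f):
    """Return the simplices with no exceptions at all (Definition 2.19)."""
    K = list(f)
    crit = []
    for s in K:
        no_bad_face = all(f[t] < f[s] for t in K
                          if t < s and len(t) == len(s) - 1)
        no_bad_coface = all(f[t] > f[s] for t in K
                            if s < t and len(t) == len(s) + 1)
        if no_bad_face and no_bad_coface:
            crit.append(s)
    return crit

# On the hollow triangle, (b, ab) is the only "exception" pair,
# so every other simplex is critical.
tri = {frozenset('a'): 0, frozenset('b'): 1, frozenset('c'): 2,
       frozenset('ab'): 1, frozenset('bc'): 3, frozenset('ac'): 4}
print(sorted(tri[s] for s in critical_simplices(tri)))   # [0, 2, 3, 4]
```

Here two critical vertices and two critical edges survive, consistent with the circle's Euler characteristic 2 − 2 = 0.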
Example 2.20. Upon investigation, we see that the critical values in Ex-
ample 2.16 are 1, 5, 8, 15, and 16.
Exercise 2.21. Let 𝐾 be a simplicial complex. Show that there is a dis-
crete Morse function on 𝐾 such that every simplex of 𝐾 is critical.
Exercise 2.22. Let 𝐾 be a simplicial complex, 𝑉 = {𝑣0 , 𝑣1 , … , 𝑣𝑛 } the
vertex set, and 𝑓0 ∶ 𝑉 → ℝ>0 any function. For any 𝜎 ∈ 𝐾, write 𝜎 ∶=
{𝑣𝑖1 , 𝑣𝑖2 , … , 𝑣𝑖𝑘 }.
(i) Prove that 𝑓 ∶ 𝐾 → ℝ>0 defined by 𝑓(𝜎) ∶= 𝑓0 (𝑣𝑖1 ) + 𝑓0 (𝑣𝑖2 ) +
⋯ + 𝑓0 (𝑣𝑖𝑘 ) is a discrete Morse function. What are the critical
simplices? Is this discrete Morse function basic?
(ii) Let 𝑓 ∶ 𝐾 → ℝ>0 be defined by 𝑓(𝜎) ∶= 𝑓0 (𝑣𝑖1 ) ⋅ 𝑓0 (𝑣𝑖2 ) ⋅ ⋯ ⋅
𝑓0 (𝑣𝑖𝑘 ). Is 𝑓 a discrete Morse function? If so, prove it. If not,
give a counterexample.
Your work in Exercise 2.22 yields a discrete Morse function from a
set of values on the vertices but it is not a very good one. To be frank,
it’s abysmal. A much better way to put a discrete Morse function on
a complex from a set of values on the vertices is given by H. King, K.
Knudson, and N. Mramor in [97]. We will give an algorithm for their
construction in Section 9.1.
Problem 2.23. Prove that a 𝑝-simplex 𝜎 is regular if and only if either
of the following conditions holds:
(i) There exists 𝜏(𝑝+1) > 𝜎 such that 𝑓(𝜏) ≤ 𝑓(𝜎).
(ii) There exists 𝜈(𝑝−1) < 𝜎 such that 𝑓(𝜈) ≥ 𝑓(𝜎).
The following, sometimes referred to as the exclusion lemma, is a
very simple observation but one of the key insights into the utility of
discrete Morse theory. We will refer back to it often.
Lemma 2.24 (Exclusion lemma). Let 𝑓 ∶ 𝐾 → ℝ be a discrete Morse
function and 𝜎 ∈ 𝐾 a regular simplex. Then conditions (i) and (ii) in
Problem 2.23 cannot both be true. Hence, exactly one of the conditions
holds whenever 𝜎 is a regular simplex.
Proof. Writing 𝜎 = 𝑎0 𝑎1 ⋯ 𝑎𝑝−1 𝑎𝑝 and renaming the elements if
necessary, suppose by contradiction that 𝜏 = 𝑎0 ⋯ 𝑎𝑝 𝑎𝑝+1 > 𝜎 and
𝜈 = 𝑎0 ⋯ 𝑎𝑝−1 < 𝜎 satisfy 𝑓(𝜏) ≤ 𝑓(𝜎) ≤ 𝑓(𝜈). Observe that 𝜎̃ ∶=
𝑎0 𝑎1 ⋯ 𝑎𝑝−1 𝑎𝑝+1 satisfies 𝜈 < 𝜎̃ < 𝜏. Since 𝜎 is already the one coface
of 𝜈 allowed to satisfy 𝑓(𝜎) ≤ 𝑓(𝜈), the other coface 𝜎̃ of 𝜈 must satisfy
𝑓(𝜈) < 𝑓(𝜎̃). Similarly, since 𝜎 is the one face of 𝜏 allowed to satisfy
𝑓(𝜎) ≥ 𝑓(𝜏), we get 𝑓(𝜎̃) < 𝑓(𝜏). Hence
𝑓(𝜏) ≤ 𝑓(𝜎) ≤ 𝑓(𝜈) < 𝑓(𝜎̃) < 𝑓(𝜏),
which is a contradiction. Thus, when 𝜎 is a regular simplex, exactly one
of the conditions in Problem 2.23 holds. □
Problem 2.25. Let 𝑓 ∶ 𝐾 → ℝ be a discrete Morse function. Is it possible
to have a pair of simplices 𝜏(𝑖) < 𝜎(𝑝) in 𝐾 with 𝑖 < 𝑝 − 1 such that
𝑓(𝜏) > 𝑓(𝜎)? If yes, give an example. If not, prove it. Does the result
change if 𝑓 is a basic discrete Morse function?
Problem 2.26. Prove that if 𝑓 ∶ 𝐾 → ℝ is a discrete Morse function,
then 𝑓 has at least one critical simplex (namely, a critical 0-simplex).
2.1.3. Forman equivalence. Upon investigation of discrete Morse
functions, an immediate question arises: is there a notion of “same” dis-
crete Morse functions? For example, if we add .01 to all of the values
in any discrete Morse function, we would technically have a different
function, but for all practical purposes such a function is no different.
We thus need a notion of equivalence or sameness of discrete Morse
functions. Since the defining characteristic of discrete Morse functions
seems to be the relationship of the values to each other, we make the
following definition:
Definition 2.27. Two discrete Morse functions 𝑓 and 𝑔 on 𝐾 are said to
be Forman equivalent if for every pair of simplices 𝜎(𝑝) < 𝜏(𝑝+1) in 𝐾,
we have 𝑓(𝜎) < 𝑓(𝜏) if and only if 𝑔(𝜎) < 𝑔(𝜏).
Exercise 2.28. Show that Forman equivalence is an equivalence rela-
tion on the set of all discrete Morse functions on a fixed simplicial com-
plex 𝐾.
Exercise 2.29. Prove that 𝑓 and 𝑔 are Forman equivalent if and only if
for every pair of simplices 𝜎(𝑝) < 𝜏(𝑝+1) in 𝐾, we have 𝑓(𝜎) ≥ 𝑓(𝜏) if and
only if 𝑔(𝜎) ≥ 𝑔(𝜏).
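Definition 2.27 compares only the relative order of the two values across each codimension-1 pair, so it too can be checked directly. A sketch (encoding ours):

```python
def forman_equivalent(f, g):
    """Definition 2.27: f, g are dicts on the same simplices; for every pair
    sigma < tau of codimension 1, f(sigma) < f(tau) iff g(sigma) < g(tau)."""
    K = list(f)
    return all((f[s] < f[t]) == (g[s] < g[t])
               for s in K for t in K
               if s < t and len(t) == len(s) + 1)

# Rescaling and shifting never changes the order of values, so f and 2f + 1
# are always Forman equivalent.
f = {frozenset('a'): 0, frozenset('b'): 1, frozenset('ab'): 1}
g = {s: 2 * v + 1 for s, v in f.items()}
print(forman_equivalent(f, g))   # True
```

In particular, adding .01 to every value, as in the discussion above, passes this test for any discrete Morse function.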
Example 2.30. Consider the two discrete Morse functions 𝑓, 𝑔 ∶ 𝐾 → ℝ
with 𝑓 on the left and 𝑔 on the right:
[Figure: the same simplicial complex with two labelings, 𝑓 on the left and 𝑔 on the right.]
For convenience, we give names to the vertices and recall the con-
vention that a simplex can be communicated as a concatenation of its
vertices.
[Figure: the complex redrawn with its vertices named 𝑣1 , 𝑣2 , 𝑣3 , 𝑣4 , 𝑣5 .]
To check if the two are Forman equivalent, we must check that 𝑓
“does the same thing” as 𝑔 does for any simplex/codimension-1 face pair,
and vice versa. So, for example, 𝑓(𝑣4 ) = 4 < 8 = 𝑓(𝑣4 𝑣1 ). Does the
same inequality hold for 𝑔? Yes, as 𝑔(𝑣4 ) = 9 < 12 = 𝑔(𝑣4 𝑣1 ). Now
𝑓(𝑣5 ) = 8 ≥ 7 = 𝑓(𝑣5 𝑣1 ), and 𝑔 likewise has the same relationship
on this pair as 𝑔(𝑣5 ) = 5 ≥ 4 = 𝑔(𝑣5 𝑣1 ). Checking this for every pair of
simplices 𝜎(𝑝) < 𝜏(𝑝+1) , we see that 𝑓(𝜎) < 𝑓(𝜏) if and only if 𝑔(𝜎) < 𝑔(𝜏).
Hence 𝑓 and 𝑔 are Forman equivalent.
There is a nice characterization of Forman-equivalent discrete
Morse functions which we investigate in Section 2.2.1. A special kind
of discrete Morse function that is a bit more “well-behaved” is one in
which all the critical values are different.
Definition 2.31. A discrete Morse function 𝑓 ∶ 𝐾 → ℝ is said to be
excellent if 𝑓 is 1–1 on the set of critical simplices.
In other words, a discrete Morse function is excellent if all the crit-
ical values are distinct. We explore the controlled behavior of excellent
discrete Morse functions in Section 5.1.1.
Problem 2.32. Show that every basic discrete Morse function is excel-
lent.
The following lemma claims that up to Forman equivalence, we may
assume that any discrete Morse function is excellent.
Lemma 2.33. Let 𝑓 ∶ 𝐾 → ℝ be a discrete Morse function. Then there is
an excellent discrete Morse function 𝑔 ∶ 𝐾 → ℝ which is Forman equiv-
alent to 𝑓.
Proof. Let 𝜎1 , 𝜎2 ∈ 𝐾 be critical simplices such that 𝑓(𝜎1 ) = 𝑓(𝜎2 ). If no
such simplices exist, then we are done. Otherwise, we define 𝑓′ ∶ 𝐾 → ℝ
by 𝑓′ (𝜏) = 𝑓(𝜏) for all 𝜏 ≠ 𝜎1 and 𝑓′ (𝜎1 ) = 𝑓(𝜎1 ) + 𝜖 where 𝑓(𝜎1 ) + 𝜖 is
strictly less than the smallest value of 𝑓 greater than 𝑓(𝜎1 ). Then 𝜎1 is a
critical simplex of 𝑓′ , and 𝑓′ is Forman equivalent to 𝑓. Repeat the
construction for any two critical simplices of 𝑓′ that share the same
value. Since 𝑓 has a finite number of critical values, the process
terminates with an excellent discrete Morse function 𝑔 that is Forman
equivalent to 𝑓. □
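The perturbation in the proof can be carried out mechanically: find two critical simplices sharing a value and nudge one of them up, staying strictly below the next value of 𝑓. A sketch (the encoding and the choice of epsilon are ours):

```python
from itertools import combinations

def make_excellent(f):
    """Perturb f (a dict from simplices, stored as frozensets, to values)
    until no two critical simplices share a value, as in Lemma 2.33."""
    f = dict(f)
    K = list(f)
    def critical(s):
        faces_ok = all(f[t] < f[s] for t in K if t < s and len(t) == len(s) - 1)
        cofaces_ok = all(f[t] > f[s] for t in K if s < t and len(t) == len(s) + 1)
        return faces_ok and cofaces_ok
    while True:
        crit = [s for s in K if critical(s)]
        tie = next(((a, b) for a, b in combinations(crit, 2) if f[a] == f[b]), None)
        if tie is None:
            return f
        a, _ = tie
        bigger = [v for v in f.values() if v > f[a]]
        # nudge f(a) up, staying strictly below the next value of f
        f[a] += ((min(bigger) - f[a]) / 2) if bigger else 1.0

# Two disjoint vertices at the same height are both critical; after the
# perturbation their values are distinct.
two_points = {frozenset('a'): 0.0, frozenset('b'): 0.0}
print(sorted(make_excellent(two_points).values()))   # [0.0, 1.0]
```

Because each nudge lands strictly between consecutive values of 𝑓, no order relation across a codimension-1 pair ever flips, which is what keeps the result Forman equivalent to the input.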
Problem 2.34. Let 𝐾 be collapsible. Prove that there exists a discrete
Morse function 𝑓 on 𝐾 with exactly one critical simplex.
Problem 2.35. Show that there exists a discrete Morse function on 𝑆 𝑛
with exactly two critical simplices—namely, a critical 0-simplex and a
critical 𝑛-simplex.
2.2. Gradient vector fields
With the Forman definition under our belts, we now show how to deter-
mine the gradient vector field or set of arrows that we saw at the begin-
ning of the chapter. We begin with an example to see how to pass from
the Forman definition of a discrete Morse function to a set of arrows.
Example 2.36. Let 𝐺 be the simplicial complex from Example 2.16. The
idea of the gradient vector field is to graphically show what the discrete
Morse function is doing to the complex 𝐺.
[Figure: the graph of Example 2.16 with its labeling repeated.]
Whenever a simplex 𝜎 has a value greater than or equal to one of
its codimension-1 cofaces 𝜏, we draw an arrow on 𝜏 (head of the arrow)
pointing away from 𝜎 (tail of the arrow) and remove all numerical values.
Thus the above graph is transformed into
Observe that in the above complex, an edge is critical if and only if
it is not the head of an arrow, and a vertex is critical if and only if it is not
the tail of an arrow. It is also easy to see that we do not have a discrete
Morse function if either i) a vertex is the tail of more than one arrow or
ii) an edge is the head of more than one arrow. Of course, this can only
be drawn up to dimension 3, but the idea generalizes to any dimension.
We now make the proper general definition and discuss how the above
observations hold in general.
Definition 2.37. Let 𝑓 be a discrete Morse function on 𝐾. The induced
gradient vector field 𝑉𝑓 , or 𝑉 when the context is clear, is defined by
𝑉𝑓 ∶= {(𝜎(𝑝) , 𝜏(𝑝+1) ) ∶ 𝜎 < 𝜏, 𝑓(𝜎) ≥ 𝑓(𝜏)}.
If (𝜎, 𝜏) ∈ 𝑉𝑓 , (𝜎, 𝜏) is called a vector, an arrow, or a matching.
The element 𝜎 is a tail while 𝜏 is a head.
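Definition 2.37 translates directly into a computation: collect every codimension-1 pair on which 𝑓 fails to increase. A sketch (the frozenset encoding of simplices is our convention):

```python
def gradient_vector_field(f):
    """V_f = {(sigma, tau) : sigma a codim-1 face of tau with f(sigma) >= f(tau)}."""
    K = list(f)
    return {(s, t) for s in K for t in K
            if s < t and len(t) == len(s) + 1 and f[s] >= f[t]}

# On a single edge ab with the free pair (b, ab), the only arrow is b -> ab.
f = {frozenset('a'): 0, frozenset('b'): 1, frozenset('ab'): 1}
print(gradient_vector_field(f))
```

Each returned pair is one arrow of the picture: the first entry is the tail and the second the head.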
Remark 2.38. The concept of the gradient vector field is quite rich. We
note that there are three ways in which to view a gradient vector field.
The above definition views it as a set. In Chapter 7, we identify the gradi-
ent vector field with a discrete Morse function. Finally, Chapter 8 views
the gradient vector field as a function on an algebraic structure.
To see the connection between the arrows drawn in Example 2.36
and Definition 2.37, we give names to the simplices in Example 2.36.
Example 2.39. We label the graph in Example 2.36 as follows:
[Figure: the graph with its vertices named 𝑎 through ℎ and the arrows drawn on it.]
Then 𝑉𝑓 = {(ℎ, 𝑔ℎ), (𝑔, 𝑔𝑓), (𝑓, 𝑒𝑓), (𝑒, 𝑏𝑒), (𝑏, 𝑎𝑏), (𝑐, 𝑎𝑐), (𝑑, 𝑑𝑐)}. In
other words, an element (𝜎, 𝜏) of the induced gradient vector field may
be thought of as an arrow induced by 𝑓 on the simplicial complex with
𝜎 the tail and 𝜏 the head.
Problem 2.40. Let 𝑓 ∶ 𝐾 → ℝ be the discrete Morse function given by
[Figure: a simplicial complex with each simplex labeled by the value of 𝑓.]
Find the induced gradient vector field 𝑉𝑓 . Communicate this both by
drawing the gradient vector field on 𝐾 and by writing down the elements
in the set 𝑉𝑓 .
Problem 2.41. Find the induced gradient vector field of the function on
the torus given in Example 2.12.
Remark 2.42. Given the induced gradient vector fields in both Exam-
ple 2.39 and Problem 2.40, we see that every simplex is either exactly one
tail, exactly one head, or not in the induced gradient vector field (i.e., is
critical). We have already proven that this phenomenon holds in gen-
eral, although not in this language. If we let 𝜎 be a simplex of 𝐾 and 𝑓 a
discrete Morse function on 𝐾, Lemma 2.24 yields that exactly one of the
following holds:
i) 𝜎 is the tail of exactly one arrow.
ii) 𝜎 is the head of exactly one arrow.
iii) 𝜎 is neither the head nor the tail of an arrow; that is, 𝜎 is critical.
Conversely, we may ask if the three conditions in Remark 2.42 al-
ways yield a gradient vector field induced by some discrete Morse func-
tion. Such a partition of the simplices of a simplicial complex 𝐾 is a
discrete vector field on 𝐾. The formal definition is given below.
Definition 2.43. Let 𝐾 be a simplicial complex. A discrete vector field
𝑉 on 𝐾 is defined by
𝑉 ∶= {(𝜎(𝑝) , 𝜏(𝑝+1) ) ∶ 𝜎 < 𝜏, each simplex of 𝐾 in at most one pair}.
Exercise 2.44. Show that every gradient vector field is a discrete vector
field.
To state the converse of Exercise 2.44 using our new language, we
ask: “Is every discrete vector field a gradient vector field of some discrete
Morse function 𝑓?”
Example 2.45. Consider the discrete vector field on the following sim-
plicial complex:
[Figure: the hollow triangle 𝑎𝑏𝑐 together with a path through the vertices 𝑐, 𝑑, 𝑒, 𝑓, with arrows drawn so that 𝑎, 𝑎𝑏, 𝑏, 𝑏𝑐, 𝑐, 𝑎𝑐 traverses a cycle.]
It is clear that for each simplex, exactly one of the conditions i), ii),
and iii) holds; that is, the above is a discrete vector field. Can we find a
discrete Morse function that induces this discrete vector field, making it
a gradient vector field? Reading off the conditions that such a gradient
vector field would impose, such a discrete Morse function 𝑓 must satisfy
𝑓(𝑎) ≥ 𝑓(𝑎𝑏) > 𝑓(𝑏) ≥ 𝑓(𝑏𝑐) > 𝑓(𝑐) ≥ 𝑓(𝑎𝑐) > 𝑓(𝑎),
which is impossible. Thus the above discrete vector field is not induced
by any discrete Morse function.
While all the arrows on the simplicial complex in Example 2.45 form
the discrete vector field, it was following a particular set of arrows that
led us to a contradiction. The problem is evidently that the arrows form
a “closed path,” which a gradient vector field cannot contain. We will
show in Theorem 2.51 below that if
the discrete vector field does not contain a closed path (see Definition
2.50), then the discrete vector field is induced by a discrete Morse func-
tion. For now, we define what we mean by a path of a discrete vector
field 𝑉.
Definition 2.46. Let 𝑉 be a discrete vector field on a simplicial complex
𝐾. A 𝑉-path or gradient path is a sequence of simplices
(𝜏−1(𝑝+1),) 𝜎0(𝑝), 𝜏0(𝑝+1), 𝜎1(𝑝), 𝜏1(𝑝+1), 𝜎2(𝑝), … , 𝜏𝑘−1(𝑝+1), 𝜎𝑘(𝑝)
of 𝐾, beginning at either a critical simplex 𝜏−1(𝑝+1) or a regular simplex
𝜎0(𝑝), such that (𝜎𝑖(𝑝), 𝜏𝑖(𝑝+1)) ∈ 𝑉 and 𝜏𝑖(𝑝+1) > 𝜎𝑖+1(𝑝) ≠ 𝜎𝑖(𝑝) for 0 ≤ 𝑖 ≤
𝑘 − 1 (and 𝜏−1 > 𝜎0 when the path begins at 𝜏−1 ). If 𝑘 ≠ 0, then the
𝑉-path is called non-trivial. Note that the very last simplex, 𝜎𝑘(𝑝), need
not be in a pair in 𝑉.
We sometimes use 𝜎0 → 𝜎1 → ⋯ → 𝜎𝑘 to denote a gradient path
from 𝜎0 to 𝜎𝑘 when there is no need to distinguish between the 𝑝- and
(𝑝 + 1)-simplices.
Remark 2.47. A 𝑉-path usually begins at a regular 𝑝-simplex, but we
will sometimes need to consider a 𝑉-path beginning at a critical (𝑝 + 1)-
simplex 𝜏−1 with 𝜏−1 > 𝜎0 (e.g. Section 8.4).
Again, we have a very technical definition that conveys a simple con-
cept: starting at a regular 𝑝-simplex (or critical (𝑝 + 1)-simplex), follow
arrows and end at a 𝑝-simplex (regular or critical).
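Following arrows can also be automated. The sketch below enumerates the maximal 𝑉-paths out of a given starting simplex by depth-first extension; it assumes there are no closed 𝑉-paths (as Theorem 2.51 will guarantee for gradient vector fields), since otherwise the recursion would not terminate. The encoding is ours:

```python
def maximal_v_paths(K, V, start):
    """Yield the maximal V-paths beginning at `start`, per Definition 2.46.
    K: iterable of simplices (frozensets); V: set of (tail, head) pairs."""
    pairing = dict(V)                    # tail simplex -> head simplex
    K = list(K)
    def extend(path):
        sigma = path[-1]
        tau = pairing.get(sigma)
        if tau is None:                  # sigma is not a tail: the path ends here
            yield path
            return
        nexts = [s for s in K
                 if s < tau and len(s) == len(tau) - 1 and s != sigma]
        for s in nexts:
            yield from extend(path + [tau, s])
        if not nexts:
            yield path + [tau]
    yield from extend([start])

# A path v1 - v2 - v3 with the arrows (v1, v1v2) and (v2, v2v3):
V = {(frozenset('1'), frozenset('12')), (frozenset('2'), frozenset('23'))}
K = [frozenset('1'), frozenset('2'), frozenset('3'),
     frozenset('12'), frozenset('23')]
print(next(maximal_v_paths(K, V, frozenset('1'))))
```

For vertices and edges the next simplex is always unique, so the path above is deterministic; in higher dimensions a head can have several remaining faces, which is why the sketch branches.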
Example 2.48. Consider the discrete vector field 𝑉 on the simplicial
complex 𝐾 below:
[Figure: a simplicial complex with vertices 𝑣0 , … , 𝑣5 , edges 𝑒0 , … , 𝑒6 , and one 2-simplex 𝑓1 , with arrows drawn on it.]
A 𝑉-path simply starts at some regular simplex (a tail of an arrow)
and follows a path of arrows. So some examples of 𝑉-paths above are
• 𝑒0 , 𝑓1 , 𝑒2
• 𝑣2 , 𝑒3 , 𝑣3 , 𝑒4 , 𝑣4
• 𝑣2
• 𝑒6 , 𝑣3 , 𝑒4 , 𝑣4
• 𝑒0
• 𝑣1 , 𝑒1 , 𝑣2 , 𝑒3 , 𝑣3
• 𝑣1 , 𝑒1 , 𝑣2 , 𝑒3 , 𝑣3 , 𝑒4 , 𝑣4 , 𝑒5 , 𝑣5
Note that the sequence 𝑣2 , 𝑒3 , 𝑣3 , 𝑒4 , 𝑣4 is still considered a 𝑉-path
even though it is only a subset of the 𝑉-path given by 𝑣1 , 𝑒1 , 𝑣2 , 𝑒3 , 𝑣3 ,
𝑒4 , 𝑣4 , 𝑒5 , 𝑣5 . A 𝑉-path that is not properly contained in any other 𝑉-
path is called maximal. This example contains two maximal 𝑉-paths,
and Example 2.36 also contains two maximal 𝑉-paths.
Exercise 2.49. How many maximal 𝑉-paths are in Problem 2.40?
Definition 2.50. A 𝑉-path beginning at 𝜎0(𝑝) is said to be closed if
𝜎𝑘(𝑝) = 𝜎0(𝑝).
In Example 2.45, we see that 𝑎, 𝑎𝑏, 𝑏, 𝑏𝑐, 𝑐, 𝑎𝑐, 𝑎 is a closed 𝑉-path.
Now we have the language and notation to characterize those discrete
vector fields which are also gradient vector fields induced by some dis-
crete Morse function.
Theorem 2.51. A discrete vector field 𝑉 is the gradient vector field of a
discrete Morse function if and only if the discrete vector field 𝑉 contains
no non-trivial closed 𝑉-paths.
The proof of this will be postponed until Section 2.2.2 on Hasse dia-
grams, which will allow us to grasp the concept more easily.
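The Hasse-diagram idea already suggests an algorithm: orient each codimension-1 edge upward when it is a pair in 𝑉 and downward otherwise; a non-trivial closed 𝑉-path then corresponds to a directed cycle, which a depth-first search detects. A sketch of this test, anticipating Section 2.2.2 (encoding ours):

```python
def is_gradient(K, V):
    """Test whether the discrete vector field V on K contains no non-trivial
    closed V-path, by looking for a directed cycle in the oriented Hasse diagram."""
    K = list(K)
    V = set(V)
    succ = {s: [] for s in K}
    for s in K:
        for t in K:
            if s < t and len(t) == len(s) + 1:
                if (s, t) in V:
                    succ[s].append(t)    # upward arrow: a pair in V
                else:
                    succ[t].append(s)    # downward arrow: an ordinary face relation
    state = {s: 'new' for s in K}
    def has_cycle(u):
        state[u] = 'active'
        for v in succ[u]:
            if state[v] == 'active' or (state[v] == 'new' and has_cycle(v)):
                return True
        state[u] = 'done'
        return False
    return not any(state[s] == 'new' and has_cycle(s) for s in K)

# The cyclic arrows of Example 2.45 on the hollow triangle abc:
tri = [frozenset('a'), frozenset('b'), frozenset('c'),
       frozenset('ab'), frozenset('bc'), frozenset('ac')]
V_bad = {(frozenset('a'), frozenset('ab')),
         (frozenset('b'), frozenset('bc')),
         (frozenset('c'), frozenset('ac'))}
print(is_gradient(tri, V_bad))   # False
```

Removing one of the three arrows breaks the cycle, and the same test then reports a genuine gradient vector field.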
2.2.1. Relationship to Forman equivalence. We saw in Example 2.30
two discrete Morse functions that turned out to be Forman equivalent.
What happens if we look at their gradient vector fields?
Exercise 2.52. Compute the gradient vector fields induced by the dis-
crete Morse functions in Example 2.30.
If you computed the gradient vector fields in the previous exercise
correctly, they should be the same. This is no coincidence, as the follow-
ing theorem, due to Ayala et al. [9, Theorem 3.1], shows that the gradient
vector field characterizes Forman-equivalent discrete Morse functions.
Theorem 2.53. Two discrete Morse functions 𝑓 and 𝑔 defined on a sim-
plicial complex 𝐾 are Forman equivalent if and only if 𝑓 and 𝑔 induce
the same gradient vector field.
Proof. For the forward direction, let 𝑓, 𝑔 ∶ 𝐾 → ℝ be Forman equiva-
lent so that if 𝜎(𝑝) < 𝜏(𝑝+1) , then 𝑓(𝜎) < 𝑓(𝜏) if and only if 𝑔(𝜎) < 𝑔(𝜏).
Thus 𝑓(𝜎) ≥ 𝑓(𝜏) if and only if 𝑔(𝜎) ≥ 𝑔(𝜏), and hence (𝜎, 𝜏) ∈ 𝑉𝑓 if and
only if (𝜎, 𝜏) ∈ 𝑉𝑔 .
Now suppose that 𝑓 and 𝑔 induce the same gradient vector field on
𝐾; i.e., 𝑉𝑓 = 𝑉𝑔 =∶ 𝑉. Using Lemma 2.24, any simplex of 𝐾 is either
critical or in exactly one pair in 𝑉. Suppose 𝜎(𝑝) < 𝜏(𝑝+1) . We need to
show that 𝑓(𝜎) ≥ 𝑓(𝜏) if and only if 𝑔(𝜎) ≥ 𝑔(𝜏). We consider the cases
below.
(a) Suppose (𝜎, 𝜏) ∈ 𝑉. This implies that 𝑓(𝜎) ≥ 𝑓(𝜏) and 𝑔(𝜎) ≥
𝑔(𝜏).
(b) Suppose 𝜎 is not in a pair in 𝑉 while 𝜏 is in a pair in 𝑉. Since
𝜎 is not in a pair in 𝑉, it is critical for both functions, so it sat-
isfies 𝑓(𝜎) < 𝑓(𝜏) and 𝑔(𝜎) < 𝑔(𝜏). The exact same conclusion
follows from the supposition that 𝜎 is in a pair in 𝑉 and 𝜏 is not
in a pair in 𝑉.
(c) Suppose 𝜎 and 𝜏 are in different pairs in 𝑉. Then 𝑓(𝜎) < 𝑓(𝜏)
and 𝑔(𝜎) < 𝑔(𝜏).
(d) Suppose neither 𝜎 nor 𝜏 are in any pair in 𝑉. Then they are
both critical, so 𝑓(𝜎) < 𝑓(𝜏) and 𝑔(𝜎) < 𝑔(𝜏).
In all cases, 𝑓(𝜎) ≥ 𝑓(𝜏) if and only if 𝑔(𝜎) ≥ 𝑔(𝜏). □
As a corollary, we have the following:
Corollary 2.54. Any two Forman-equivalent discrete Morse functions 𝑓
and 𝑔 defined on a simplicial complex 𝐾 have the same critical simplices.
Problem 2.55. Prove Corollary 2.54.
Example 2.56. We give an example to show that the converse of Corol-
lary 2.54 is false. Consider the complex 𝐾 with two discrete vector fields
given by
[Figure: two discrete vector fields on the same complex, each with the single critical 0-simplex 𝑣.]
By Theorem 2.53, the two discrete Morse functions are not Forman
equivalent, yet they have the same critical simplices (namely, the one
0-simplex 𝑣).
2.2.2. The Hasse diagram. Recall that a relation 𝑅 on a set 𝐴 is a sub-
set of 𝐴 × 𝐴; i.e., 𝑅 ⊆ 𝐴 × 𝐴. We write 𝑎𝑅𝑏 if and only if (𝑎, 𝑏) ∈ 𝑅. If 𝑎𝑅𝑎
for all 𝑎 ∈ 𝐴, then 𝑅 is called reflexive. If 𝑎𝑅𝑏 and 𝑏𝑅𝑐 implies that 𝑎𝑅𝑐
for all 𝑎, 𝑏, 𝑐 ∈ 𝐴, then 𝑅 is called transitive. If 𝑎𝑅𝑏 and 𝑏𝑅𝑎 implies
that 𝑎 = 𝑏 for all 𝑎, 𝑏 ∈ 𝐴, then 𝑅 is called antisymmetric.
Definition 2.57. A partially ordered set or poset is a set 𝑃 along with
a reflexive, antisymmetric, transitive relation usually denoted by ≤.
Example 2.58. Let 𝑃 = ℝ under the relation ≤. This is reflexive since
for every 𝑎 ∈ ℝ, 𝑎 ≤ 𝑎. Suppose 𝑎 ≤ 𝑏 and 𝑏 ≤ 𝑎; then, by definition,
𝑎 = 𝑏. Finally, it is easily seen that if 𝑎 ≤ 𝑏 and 𝑏 ≤ 𝑐, then 𝑎 ≤ 𝑐.
Example 2.59. Let 𝑋 be a finite set. It is easy to prove that the power
set of 𝑋, 𝒫(𝑋), is a poset under subset inclusion. Geometrically, we
can visualize the relations in a poset. We will take 𝑋 = {𝑎, 𝑏, 𝑐}. Write
down all the elements of 𝒫(𝑋) and draw a line between a simplex and
a codimension-1 face, keeping sets of the same cardinality in the same
row.
[Diagram: ∅ at the bottom; the singletons {𝑎}, {𝑏}, {𝑐} above it; the doubletons {𝑎, 𝑏}, {𝑎, 𝑐}, {𝑏, 𝑐} above those; and {𝑎, 𝑏, 𝑐} at the top, with a line joining each set to every subset having one fewer element.]
The above picture is called a Hasse diagram. To avoid confusion,
we refer to a point in the Hasse diagram as a node. We may associate
a Hasse diagram to any simplicial complex 𝐾 as follows: the Hasse di-
agram [148] of 𝐾, denoted by ℋ𝐾 or ℋ, is defined as the partially or-
dered set of simplices of 𝐾 ordered by the face relations; that is, ℋ is a
1-dimensional simplicial complex (or graph) such that there is a 1–1 cor-
respondence between the nodes of ℋ and the simplices of 𝐾. With an
abuse of notation, if 𝜎 ∈ 𝐾, we write 𝜎 ∈ ℋ for the corresponding node.
Finally, there is an edge between two simplices 𝜎, 𝜏 ∈ ℋ if and only if 𝜏
is a codimension-1 face of 𝜎. We organize the picture by placing nodes
in rows such that every node in the same row corresponds to a simplex
of the same dimension. In general, we’ll let ℋ(𝑖) denote the nodes of ℋ
corresponding to the 𝑖-simplices of 𝐾. We refer to ℋ(𝑖) as level 𝑖.
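Building the Hasse diagram of a finite complex is straightforward: sort the simplices into levels by dimension and join each simplex to its codimension-1 faces. A sketch (the frozenset encoding is our convention):

```python
from collections import defaultdict

def hasse_diagram(K):
    """Return (levels, edges): levels[i] lists the i-simplices of K, and each
    edge joins a simplex to one of its codimension-1 faces."""
    K = list(K)
    levels = defaultdict(list)
    for s in K:
        levels[len(s) - 1].append(s)      # an i-simplex has i + 1 vertices
    edges = [(t, s) for s in K for t in K
             if t < s and len(s) == len(t) + 1]
    return dict(levels), edges

# The full triangle abc: 3 vertices, 3 edges, and one 2-simplex.
K = [frozenset(v) for v in 'abc'] + \
    [frozenset(e) for e in ('ab', 'ac', 'bc')] + [frozenset('abc')]
levels, edges = hasse_diagram(K)
print(len(edges))   # 9: two per edge of K, plus three for the 2-simplex
```

Unlike the power-set picture above, the Hasse diagram of a simplicial complex has no node for ∅, since ∅ is not a simplex of 𝐾.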
Exercise 2.60. Let 𝐾 be a simplicial complex. Prove that the Hasse dia-
gram of 𝐾 defines a partially ordered set.
Example 2.61. We considered the simplicial complex 𝐾 given by
𝑣0 𝑣4
𝑣2 𝑣3
𝑣1 𝑣5
in Example 2.48. Its Hasse diagram is given by
[Hasse diagram, with an edge joining each simplex to each of its codimension-1 faces:]
Level 2: 𝑣0 𝑣1 𝑣2
Level 1: 𝑣0 𝑣1 , 𝑣0 𝑣2 , 𝑣1 𝑣2 , 𝑣2 𝑣3 , 𝑣3 𝑣4 , 𝑣3 𝑣5 , 𝑣4 𝑣5
Level 0: 𝑣0 , 𝑣1 , 𝑣2 , 𝑣3 , 𝑣4 , 𝑣5
Suppose we have a discrete Morse function on 𝐾. Is there a way
to put a corresponding discrete Morse function on ℋ𝐾 ? Conversely, is
there a way to put a discrete Morse function on the Hasse diagram that
yields a discrete Morse function on 𝐾? One way to figure this out is to
investigate the Hasse diagram of the discrete vector field from Example
2.45, which does not come from a discrete Morse function.
Example 2.62. Recall that Example 2.45 gave a discrete vector field on
a simplicial complex 𝐾 which did not correspond to a discrete Morse
function.
[Figure: the empty triangle 𝑎𝑏𝑐 together with the path of edges 𝑐𝑑, 𝑑𝑒, 𝑒𝑓, with the discrete vector field of Example 2.45 drawn on it.]
This was because of the closed 𝑉-path 𝑎, 𝑎𝑏, 𝑏, 𝑏𝑐, 𝑐, 𝑎𝑐, 𝑎. The Hasse
diagram of 𝐾 is given by
𝑎𝑏 𝑎𝑐 𝑏𝑐 𝑐𝑑 𝑑𝑒 𝑒𝑓
𝑎 𝑏 𝑐 𝑑 𝑒 𝑓
In order to transfer this discrete vector field onto ℋ𝐾 , we draw an
upward-pointing arrow along an edge in the Hasse diagram between a
pair in 𝑉. An arrow on a Hasse diagram is said to be upward if it is
directed from a node in ℋ(𝑖) to a node in ℋ(𝑖 + 1). Since we’ll need it
below, we also define an arrow to be downward if it is directed from a
node in ℋ(𝑖 + 1) to a node in ℋ(𝑖). If you draw all the upward arrows
corresponding to pairs in 𝑉 on ℋ, it is still unclear how you could tell
from the Hasse diagram that this does not correspond to a discrete Morse
function.
𝑎𝑏 𝑎𝑐 𝑏𝑐 𝑐𝑑 𝑑𝑒 𝑒𝑓
𝑎 𝑏 𝑐 𝑑 𝑒 𝑓
A slight modification will clarify this. In addition to drawing an
upward-pointing arrow between pairs of elements in 𝑉, draw a down-
ward-pointing arrow on all the other edges. The resulting Hasse diagram
should look like this:
𝑎𝑏 𝑎𝑐 𝑏𝑐 𝑐𝑑 𝑑𝑒 𝑒𝑓
𝑎 𝑏 𝑐 𝑑 𝑒 𝑓
Following the direction of the arrows starting at node 𝑎, we traverse
the path 𝑎, 𝑎𝑏, 𝑏, 𝑏𝑐, 𝑐, 𝑎𝑐, 𝑎, bringing us back to where we started. It is
this “directed cycle” that precludes this directed Hasse diagram from cor-
responding to a discrete Morse function.
Definition 2.63. Let 𝐾 be a simplicial complex and 𝑉𝐾 = 𝑉 a discrete
vector field on 𝐾. The directed Hasse diagram induced by 𝑉, denoted
by ℋ𝑉 , is the Hasse diagram ℋ𝐾 of 𝐾 with an arrow on every edge of
ℋ𝑉 . An arrow points upward if and only if the two nodes of the edge
are an ordered pair in 𝑉. Viewing the directed Hasse diagram as a 1-
dimensional simplicial complex, a non-trivial closed 𝑉-path of ℋ𝑉 is
called a directed cycle.
Remark 2.64. The directed Hasse diagram is sometimes called the mod-
ified Hasse diagram (e.g. in [99]).
Example 2.65. Let 𝐾 = Δ2 with gradient vector field given below:
[Figure: the 2-simplex on vertices 𝑣1 , 𝑣2 , 𝑣3 with its gradient vector field drawn on it.]
Its directed Hasse diagram is given by
Level 2: 𝑣1 𝑣2 𝑣3
Level 1: 𝑣1 𝑣2 , 𝑣1 𝑣3 , 𝑣2 𝑣3
Level 0: 𝑣1 , 𝑣2 , 𝑣3
Notice that the 0-, 1-, and 2-simplices of 𝐾 are organized in levels.
Problem 2.66. Draw the directed Hasse diagram induced by the gradi-
ent vector field on
[Figure: a simplicial complex on vertices 𝑣1 , 𝑣2 , 𝑣3 , 𝑣4 with a gradient vector field drawn on it.]
Does it contain a directed cycle?
Lemma 2.67. Let 𝐾 be a simplicial complex and 𝑉 a discrete vector field
on 𝐾. If the Hasse diagram induced by 𝑉 contains a directed cycle, then
the directed cycle is contained in exactly two levels.
Problem 2.68. Prove Lemma 2.67.
Theorem 2.69. Let 𝐾 be a simplicial complex, 𝑉𝐾 = 𝑉 a discrete vector
field on 𝐾, and ℋ𝑉 the corresponding directed Hasse diagram. There are
no non-trivial closed 𝑉-paths if and only if there are no directed cycles
in ℋ𝑉 .
Proof. We start with the backward direction. Suppose by way of con-
tradiction that 𝑉 contains a closed 𝑉-path, say
𝛼0^(𝑝), 𝛽0^(𝑝+1), 𝛼1^(𝑝), 𝛽1^(𝑝+1), 𝛼2^(𝑝), … , 𝛽𝑘^(𝑝+1), 𝛼𝑘+1^(𝑝) = 𝛼0^(𝑝).
We find a directed cycle in ℋ𝑉 . Starting at 𝛼0^(𝑝) ∈ ℋ𝑉 , traverse the
upward arrow to 𝛽0^(𝑝+1). The arrow points upward since (𝛼0^(𝑝), 𝛽0^(𝑝+1)) ∈
𝑉. Now the arrow from 𝛽0^(𝑝+1) to 𝛼1^(𝑝) points downward since otherwise
(𝛼1^(𝑝), 𝛽0^(𝑝+1)) ∈ 𝑉, contradicting the fact that 𝛽0^(𝑝+1) is in at most one pair
of 𝑉. Continuing in this manner, we traverse a directed path that begins
and ends at 𝛼0^(𝑝), which is a directed cycle.
To see the forward direction, observe that if, by contradiction, there
is a directed cycle in ℋ𝑉 , then Lemma 2.67 guarantees that it is con-
tained in exactly two levels. Thus, using the argument in the preceding
paragraph, following a directed cycle in the Hasse diagram will yield a
non-trivial closed 𝑉-path in 𝐾. This completes the proof. □
An immediate corollary of Theorem 2.69 is that Forman-equivalent
discrete Morse functions have the same directed Hasse diagram.
Corollary 2.70. Let 𝑓, 𝑔 ∶ 𝐾 → ℝ be discrete Morse functions. Then 𝑓
and 𝑔 are Forman equivalent if and only if ℋ𝑉𝑓 = ℋ𝑉𝑔 .
Proof. Apply Theorem 2.53 and Theorem 2.69. □
We need one more lemma. It is purely graph theoretic. See [20,
Proposition 1.4.3] for a proof.
Lemma 2.71. Let 𝐺 be a 1-dimensional simplicial complex and 𝑉 a dis-
crete vector field on 𝐺. Then there is a real-valued function of the ver-
tices that is strictly decreasing along each directed path if and only if 𝐺
does not have a directed cycle.
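One direction of Lemma 2.71 can be made constructive: on a finite directed graph with no directed cycle, a depth-first postorder yields such a function. A Python sketch (our own illustration, not the book's proof):

```python
def decreasing_function(succ):
    """succ maps each node to its list of out-neighbors; assuming the
    graph has no directed cycle, return f with f(u) > f(v) for every
    arrow u -> v, hence strictly decreasing along every directed path."""
    order, seen = [], set()
    def visit(u):
        if u in seen:
            return
        seen.add(u)
        for v in succ[u]:
            visit(v)
        order.append(u)   # u is appended only after everything reachable from it
    for u in succ:
        visit(u)
    return {u: i for i, u in enumerate(order)}

f = decreasing_function({"a": ["ab"], "ab": ["b"], "b": []})
print(f["a"] > f["ab"] > f["b"])  # True
```

If the graph did contain a directed cycle, no such function could exist, since values must strictly decrease around the cycle.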
As promised, we now prove Theorem 2.51.
Proof of Theorem 2.51. A discrete vector field 𝑉 is the gradient vector
field of a discrete Morse function 𝑓 if and only if the corresponding nodes
on ℋ𝑉 with the values given by 𝑓 yield a real-valued function which is
strictly increasing along each directed path, and this is the case if and
only if ℋ𝑉 has no directed cycle (Lemma 2.71), which is the case if and
only if 𝑉 contains no non-trivial closed 𝑉-paths (Theorem 2.69). □
Problem 2.72. Prove Lemma 2.71.
In light of Theorem 2.51, we will use the term “gradient vector field”
to mean either that which is induced by a discrete Morse function (the
proper sense of the term) or a discrete vector field with no non-trivial
closed 𝑉-paths.
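Theorem 2.69 reduces checking whether a discrete vector field is a gradient vector field to detecting a directed cycle in ℋ𝑉 . A Python sketch of this check (our own illustration; simplices are stored as frozensets of vertices):

```python
def is_gradient(simplices, V):
    """V is a list of pairs (alpha, beta) with alpha a codimension-1
    face of beta. Build the directed Hasse diagram (upward along pairs
    of V, downward on every other edge) and test for a directed cycle."""
    up = dict(V)
    succ = {s: [] for s in simplices}
    for b in simplices:
        for a in simplices:
            if a < b and len(b) - len(a) == 1:   # a is a codim-1 face of b
                if up.get(a) == b:
                    succ[a].append(b)            # upward arrow: (a, b) in V
                else:
                    succ[b].append(a)            # downward arrow
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {s: WHITE for s in simplices}
    def has_cycle(s):                            # depth-first cycle search
        color[s] = GRAY
        for t in succ[s]:
            if color[t] == GRAY or (color[t] == WHITE and has_cycle(t)):
                return True
        color[s] = BLACK
        return False
    return not any(color[s] == WHITE and has_cycle(s) for s in simplices)

tri = [frozenset(x) for x in ("a", "b", "c", "ab", "bc", "ac")]
a, b, c, ab, bc, ac = tri
print(is_gradient(tri, [(a, ab), (b, bc)]))           # True
print(is_gradient(tri, [(a, ab), (b, bc), (c, ac)]))  # False
```

The second call detects the closed 𝑉-path 𝑎, 𝑎𝑏, 𝑏, 𝑏𝑐, 𝑐, 𝑎𝑐, 𝑎 from Example 2.45 as a directed cycle.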
Problem 2.73. Could the following be the directed Hasse diagram ℋ𝐾
of a simplicial complex 𝐾 induced by a discrete Morse function on 𝐾? A
discrete vector field on 𝐾? In both cases, justify your answer. (Note that
the downward arrows are suppressed to avoid clutter.)
[Figure: a directed Hasse diagram with certain upward arrows drawn, on the levels:]
Level 2: 𝑣0 𝑣1 𝑣2 , 𝑣0 𝑣2 𝑣3
Level 1: 𝑣0 𝑣1 , 𝑣0 𝑣2 , 𝑣0 𝑣3 , 𝑣1 𝑣2 , 𝑣2 𝑣3 , 𝑣2 𝑣4 , 𝑣3 𝑣4
Level 0: 𝑣0 , 𝑣1 , 𝑣2 , 𝑣3 , 𝑣4
2.2.3. Generalized discrete Morse functions. As often happens in
mathematics, an equivalent definition may allow for a more general un-
derstanding. Remark 2.42 tells us that a discrete Morse function always
partitions the simplices of a simplicial complex. However, the way the
partition can look is very restricted: the sets of the partition must be ei-
ther of size two (regular pairs) or of size one (critical simplex). What if
we allow more general partitions using sets of any size? This idea was
first suggested in [72], and nice applications in geometric topology are
found in [25, 57] and generalized further by the latter authors in [26].
Here we content ourselves with giving the basic definitions and results.
As a warmup, recall explicitly how a discrete Morse function induces a
partition.
Exercise 2.74. Let 𝑓 be the discrete Morse function from Example 2.39.
Write down the partition induced by 𝑓.
Definition 2.75. Let 𝐾 be a simplicial complex. For any 𝛼, 𝛽 ∈ 𝐾, the
interval [𝛼, 𝛽] is the subset of 𝐾 given by
[𝛼, 𝛽] ∶= {𝛾 ∈ 𝐾 ∶ 𝛼 ⊆ 𝛾 ⊆ 𝛽}.
Exercise 2.76. Show that [𝛼, 𝛽] ≠ ∅ if and only if 𝛼 ⊆ 𝛽.
Example 2.77. In Example 2.61, we found the following Hasse diagram
ℋ𝐾 :
Level 2: 𝑣0 𝑣1 𝑣2
Level 1: 𝑣0 𝑣1 , 𝑣0 𝑣2 , 𝑣1 𝑣2 , 𝑣2 𝑣3 , 𝑣3 𝑣4 , 𝑣3 𝑣5 , 𝑣4 𝑣5
Level 0: 𝑣0 , 𝑣1 , 𝑣2 , 𝑣3 , 𝑣4 , 𝑣5
We illustrate Definition 2.75 by computing intervals. For example,
[𝑣0 , 𝑣0 𝑣1 𝑣2 ] = {𝑣0 , 𝑣0 𝑣1 , 𝑣0 𝑣2 , 𝑣0 𝑣1 𝑣2 }, [∅, 𝑣4 𝑣5 ] = {∅, 𝑣4 , 𝑣5 , 𝑣4 𝑣5 },
[𝑣4 𝑣5 , 𝑣4 𝑣5 ] = {𝑣4 𝑣5 }, and [𝑣1 , 𝑣2 ] = ∅.
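Definition 2.75 translates directly into code when simplices are stored as sets of vertices. A quick sketch (our own illustration, not from the text):

```python
from itertools import combinations

def interval(K, alpha, beta):
    # [alpha, beta] = {gamma in K : alpha is a subset of gamma, gamma of beta}
    return {g for g in K if alpha <= g <= beta}

# K = the full 2-simplex on vertices a, b, c (all nonempty faces)
K = {frozenset(s) for r in (1, 2, 3) for s in combinations("abc", r)}
print(len(interval(K, frozenset("a"), frozenset("abc"))))  # 4 simplices
print(interval(K, frozenset("b"), frozenset("a")))         # empty, as in Exercise 2.76
```

The first interval is {𝑎, 𝑎𝑏, 𝑎𝑐, 𝑎𝑏𝑐}, mirroring the computation of [𝑣0 , 𝑣0 𝑣1 𝑣2 ] above.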
Any partition 𝑊 of 𝐾 into intervals is called a generalized discrete
vector field. The terminology is justified by the fact that every discrete
Morse function may be viewed as a generalized discrete vector field. In-
deed, if 𝑓 ∶ 𝐾 → ℝ is a discrete Morse function, we know by Remark
2.42 that under 𝑓, every simplex of 𝐾 is either critical or part of a free
pair (and not both). For each critical simplex 𝜎, choose [𝜎, 𝜎] = {𝜎}. For
each free pair 𝛼 < 𝛽, choose [𝛼, 𝛽] = {𝛼, 𝛽}. This forms a partition of 𝐾
and hence is a generalized discrete vector field.
Exercise 2.78. Find a generalized discrete vector field on the simplicial
complex in Example 2.77.
Problem 2.79. Give an example to show why “generalized gradient vec-
tor field” would not be an appropriate name for the above definition. You
may need to recall the distinction between gradient vector field and dis-
crete vector field.
Fix a generalized discrete vector field 𝑊 on 𝐾 and let 𝑓 ∶ 𝐾 → ℝ
be a function (not necessarily a discrete Morse function) which satisfies
𝑓(𝛼) ≤ 𝑓(𝛽) whenever 𝛼 < 𝛽, with 𝑓(𝛼) = 𝑓(𝛽) if and only if there
exists an interval 𝐼 ∈ 𝑊 such that 𝛼, 𝛽 ∈ 𝐼. Then 𝑓 is called a general-
ized discrete Morse function and 𝑊 its generalized discrete vector
field. An interval containing only one simplex 𝜎 is singular, and 𝜎 is
called a critical simplex, with 𝑓(𝜎) a critical value of 𝑓. Note that two
intervals may share the same value.
Problem 2.80. Let 𝐾 = Δ𝑛 be the 𝑛-simplex and let 𝑓 ∶ 𝐾 → ℝ be
defined by 𝑓(𝜎) = 0 for every 𝜎 ∈ 𝐾. Find a partition 𝑊 of 𝐾 that makes
𝑓 a generalized discrete Morse function.
Example 2.81. Let 𝐾 be the simplicial complex
[Figure: a simplicial complex on vertices 𝑣1 , … , 𝑣8 arranged in a ring, containing the triangles 𝑣1 𝑣2 𝑣8 , 𝑣2 𝑣3 𝑣4 , 𝑣4 𝑣5 𝑣6 , and 𝑣6 𝑣7 𝑣8 .]
and 𝑊 the partition into intervals [𝑣1 , 𝑣1 𝑣2 𝑣8 ], [𝑣2 , 𝑣2 𝑣3 𝑣4 ], [𝑣3 , 𝑣3 𝑣4 ],
[𝑣4 , 𝑣4 𝑣5 𝑣6 ], [𝑣5 , 𝑣5 𝑣6 ], [𝑣6 , 𝑣6 𝑣7 𝑣8 ], [𝑣2 𝑣8 , 𝑣2 𝑣8 ], [𝑣7 , 𝑣7 𝑣8 ], and [𝑣8 , 𝑣8 ]. Define
𝑓 by assigning a single value to each interval:
𝑓 = 8 on [𝑣1 , 𝑣1 𝑣2 𝑣8 ], 𝑓 = 7 on [𝑣2 𝑣8 , 𝑣2 𝑣8 ], 𝑓 = 6 on [𝑣2 , 𝑣2 𝑣3 𝑣4 ],
𝑓 = 5 on [𝑣3 , 𝑣3 𝑣4 ], 𝑓 = 4 on [𝑣4 , 𝑣4 𝑣5 𝑣6 ], 𝑓 = 3 on [𝑣5 , 𝑣5 𝑣6 ],
𝑓 = 2 on [𝑣6 , 𝑣6 𝑣7 𝑣8 ], 𝑓 = 1 on [𝑣7 , 𝑣7 𝑣8 ], and 𝑓 = 0 on [𝑣8 , 𝑣8 ].
Then 𝑓 is a generalized discrete Morse function, but clearly not a
discrete Morse function.
Problem 2.82. Let 𝐾, 𝐿 ⊆ 𝑀 be subcomplexes of a simplicial complex
𝑀, and let 𝑓 ∶ 𝐾 → ℝ and 𝑔 ∶ 𝐿 → ℝ be generalized discrete Morse
functions with gradients 𝑉 and 𝑊, respectively. Prove that (𝑓 + 𝑔) ∶ 𝐾 ∩
𝐿 → ℝ is also a generalized discrete Morse function, with gradient given
by 𝑈 ∶= {𝐼 ∩ 𝐽 ∶ 𝐼 ∈ 𝑉, 𝐽 ∈ 𝑊, 𝐼 ∩ 𝐽 ≠ ∅}.
There are at least two ways in which generalized discrete Morse the-
ory can prove helpful. One is that it can detect and perform multiple col-
lapses at once. This will be made precise in Corollary 4.30, but the idea
can be seen in Example 2.81. All simplices labeled 8 can be removed as a
sequence of two elementary collapses. The edge labeled 7 is critical, but
the simplices labeled 6 are again a sequence of two elementary collapses
and hence may be removed. The pair labeled 5 is a free pair which may
be removed, etc. The point is that we obtain a collapsing theorem by
which all the simplices with the same value may be removed simultane-
ously. This is especially helpful for computational purposes. We will see
a similar idea of how to perform simultaneous collapses in Chapter 10.
Another situation in which generalized discrete Morse theory is
helpful is when the functions one is working with don’t satisfy the prop-
erties of a discrete Morse function but are generalized discrete Morse
functions. This was the case examined in the paper [26] by U. Bauer and
H. Edelsbrunner. They showed that certain radius functions of Čech and
Delaunay complexes are generalized discrete Morse functions and used
this to prove a collapsing theorem.
2.3. Random discrete Morse theory
In this brief section, we introduce yet another way to view a discrete
Morse function. This point of view is more algorithmic and lends itself
well to some computational purposes.
2.3.1. Discrete Morse vectors and optimality.
Definition 2.83. Let 𝐾 be an 𝑛-dimensional simplicial complex and
𝑓 ∶ 𝐾 → ℝ a discrete Morse function, and let 𝑚_𝑖^𝑓 (or just 𝑚_𝑖 if the func-
tion 𝑓 is clear) denote the number of critical 𝑖-simplices of 𝑓. Define
𝑓 ⃗ ∶= (𝑚_0^𝑓 , 𝑚_1^𝑓 , 𝑚_2^𝑓 , … , 𝑚_𝑛^𝑓 ) to be the discrete Morse vector of 𝑓.
Exercise 2.84. Let 𝑓 ∶ 𝐾 → ℝ be a discrete Morse function. Recall
that the 𝑐-vector of 𝐾 is the vector 𝑐 ⃗ = (𝑐_0 , 𝑐_1 , … , 𝑐_dim(𝐾) ) where 𝑐_𝑖 is
the number of simplices of 𝐾 of dimension 𝑖. Show that 𝑓 ⃗ ≤ 𝑐,⃗ meaning
that 𝑚_𝑖^𝑓 ≤ 𝑐_𝑖 for all 𝑖. Can it ever be the case that 𝑓 ⃗ = 𝑐?⃗
Note that by Problem 2.26, we will always have 𝑚0 ≥ 1.
Problem 2.85. Prove that if a simplicial complex 𝐾 is collapsible,
then there is a discrete Morse function with discrete Morse vector
(1, 0, 0, 0, … , 0).
Problem 2.86. If 𝐾 ↗ 𝐿 through a series of elementary expansions and
𝑓 ∶ 𝐾 → ℝ is a discrete Morse function with discrete Morse vector 𝑓,⃗
prove that there exists a discrete Morse function 𝑔 ∶ 𝐿 → ℝ such that
𝑔|𝐾 = 𝑓 and 𝑔⃗ = 𝑓.⃗ Here 𝑔|𝐾 denotes the restriction of the function 𝑔
to the domain 𝐾.
As you worked through Exercise 2.84, it may have occurred to you
that for a fixed simplicial complex 𝐾, the values that 𝑓 ⃗ can take can vary
quite a bit. This raises the question of how one might define a “best
possible” 𝑓-vector. The following definition is one such attempt.
Definition 2.87. Let 𝑓 ∶ 𝐾 → ℝ be a discrete Morse function with dis-
crete Morse vector 𝑓 ⃗ ∶= (𝑚_0^𝑓 , 𝑚_1^𝑓 , 𝑚_2^𝑓 , … , 𝑚_𝑛^𝑓 ). Then 𝑓 ⃗ is called optimal
if ∑_{𝑖=0}^{𝑛} 𝑚_𝑖^𝑓 is minimal in the sense that if 𝑔 ∶ 𝐾 → ℝ is any other discrete
Morse function on 𝐾, then ∑_{𝑖=0}^{𝑛} 𝑚_𝑖^𝑓 ≤ ∑_{𝑖=0}^{𝑛} 𝑚_𝑖^𝑔 .
Example 2.88. Let 𝑓 ∶ 𝐾 → ℝ be given by
[Figure: 𝐾 with the values of 𝑓 written on its simplices.]
By inspection, 𝑚_0^𝑓 = 3, 𝑚_1^𝑓 = 5, and 𝑚_2^𝑓 = 1 so that the discrete
Morse vector is given by 𝑓 ⃗ = (3, 5, 1). Can we find a discrete Morse
function with fewer critical values? It isn’t too hard. Let 𝑔 ∶ 𝐾 → ℝ be
given by
[Figure: 𝐾 with the values of 𝑔 written on its simplices.]
Now it is easy to check that 𝑚_0^𝑔 = 1, 𝑚_1^𝑔 = 3, and 𝑚_2^𝑔 = 0 so that
𝑔⃗ = (1, 3, 0). This is clearly better than 𝑓.⃗ Can we do even better? It
seems like this is the best we can do. We’ll see in Section 4.1.2 that this
discrete Morse vector is indeed optimal for our simplicial complex.
Although every simplicial complex has an optimal discrete Morse
function, finding one can be a challenge. Lewiner et al. [109], for exam-
ple, present a linear-time algorithm for finding optimal discrete Morse
functions on a special class of 2-dimensional simplicial complexes. Note
that an optimal discrete Morse vector need not be unique. Indeed, K.
Adiprasito, B. Benedetti, and F. Lutz constructed a 3-dimensional
simplicial complex with optimal discrete Morse vectors (1, 1, 1, 0) and
(1, 0, 1, 1). In fact, they were able to prove the following more general
result.
Theorem 2.89 ([4, Theorem 3.3]). For every 𝑑 ≥ 3 there exists a non-
collapsible 𝑑-dimensional simplicial complex that has two distinct opti-
mal discrete Morse vectors
(1, 0, … , 0, 1, 1, 0) and (1, 0, … , 0, 0, 1, 1).
2.3.2. Benedetti-Lutz algorithm. In this section, we study a “random-
ized” approach to discrete Morse theory. Let 𝐾 be a simplicial com-
plex. How would one put a random discrete Morse function on 𝐾? One
could begin by labeling simplices at random, but then there is no way to
guarantee that the resulting labeling is actually a discrete Morse func-
tion. Any attempt to tweak the values in order to guarantee a discrete
Morse function would seem to either remove the randomness or result
in something too complicated in practice. The Benedetti-Lutz or B-
L algorithm [33] avoids these problems by viewing the discrete Morse
function in terms of its geometry. The idea is simple: Given a simplicial
complex 𝐾, perform one of two moves—either 1) remove a free pair or
2) remove a top-dimensional facet, with preference being given to op-
tion 1). When 2) is performed, the dimension of the removed facet is
recorded in a discrete Morse vector. Formally, the B-L algorithm is given
in Algorithm 1.
Algorithm 1 B-L algorithm
Input: A 𝑑-dimensional abstract finite simplicial complex 𝐾 given
by its list of facets.
Output: The resulting discrete Morse vector (𝑐0 , 𝑐1 , … , 𝑐𝑑 ).
1 Initialize 𝑐0 = 𝑐1 = ⋯ = 𝑐𝑑 = 0.
2 If the complex is empty, STOP. Otherwise, go to Step 3.
3 If there is a free codimension-1 face, go to Step 4. If not, go to
Step 5.
4 Pick a free codimension-1 face uniformly at random and delete
it together with the unique coface that contains it. Go back to
Step 2.
5 Pick a top-dimensional 𝑖-face uniformly at random and delete
it, and set 𝑐_𝑖 = 𝑐_𝑖 + 1. Go back to Step 2.
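Algorithm 1 is short enough to prototype directly. A Python sketch of a single run (our own illustration; here a free codimension-1 face is detected as a simplex with exactly one proper coface in the current complex):

```python
import random
from itertools import combinations

def bl_run(facets, rng=random):
    """One run of the B-L algorithm; facets are frozensets of vertices.
    Returns the discrete Morse vector (c_0, ..., c_d)."""
    # Build the full complex from its list of facets.
    K = {frozenset(s) for f in facets
         for r in range(1, len(f) + 1) for s in combinations(sorted(f), r)}
    d = max(len(s) for s in K) - 1
    c = [0] * (d + 1)
    while K:
        cof = {s: [t for t in K if s < t] for s in K}
        free = [s for s in K if len(cof[s]) == 1]
        if free:                       # Step 4: an elementary collapse
            s = rng.choice(free)
            K -= {s, cof[s][0]}
        else:                          # Step 5: delete a top-dimensional face
            s = rng.choice([t for t in K if not cof[t]])
            K.remove(s)
            c[len(s) - 1] += 1         # record a critical i-simplex
    return tuple(c)

print(bl_run([frozenset("abc")]))  # (1, 0, 0): the 2-simplex fully collapses
print(bl_run([frozenset(e) for e in ("ab", "bc", "cd", "ad")]))  # (1, 1)
```

For the hollow square in the second call there are no free faces at the start, so the first move must delete a random edge, after which the remaining path collapses to a point.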
Problem 2.90. Let 𝐾 be a simplicial complex. Show how the B-L algo-
rithm yields a discrete Morse function on 𝐾.
Definition 2.91. The discrete Morse spectrum of a simplicial com-
plex 𝐾 is the collection of all possible discrete Morse vectors produced by
the B-L algorithm along with the distribution of the respective probabili-
ties. If 𝑝𝑖 is the probability of obtaining discrete Morse vector 𝑐𝑖⃗ , then the
discrete Morse spectrum of 𝐾 is denoted by {𝑝1 − 𝑐1⃗ , 𝑝2 − 𝑐2⃗ , … , 𝑝𝑘 − 𝑐𝑘⃗ }
for some 𝑘 ∈ ℕ, where ∑ 𝑝𝑖 = 1 and 𝑝𝑖 ≠ 0 for any 𝑖.
Example 2.92. Let 𝐾 be the simplicial complex of Example 2.81. The B-
L algorithm removes free pairs from this complex, and it is not difficult to
see that any sequence of removals of free pairs will result in the simplicial
complex
[Figure: the cycle with edges 𝑣2 𝑣4 , 𝑣4 𝑣6 , 𝑣6 𝑣8 , and 𝑣2 𝑣8 .]
Any edge is chosen with probability 1/4, and the resulting output discrete
Morse vector is (1, 1). Thus the discrete Morse spectrum for this simpli-
cial complex is {1 − (1, 1)}.
Problem 2.93. Compute the discrete Morse spectrum of the following
simplicial complex:
Given your work in Problem 2.90, we can now see how Algorithm 1
gives a discrete Morse function. The algorithm was introduced by B.
Benedetti and F. Lutz in the paper cited above. They studied this con-
cept extensively in terms of its usefulness for testing the complexity of
certain simplicial complexes. Such an endeavor is beyond the scope of
this book (but see for example [4, 33]). However, there are some simple
observations that we can make.
Proposition 2.94. Let 𝐾 be a simplicial complex. Suppose that the
discrete Morse vector output on a single run of the B-L algorithm is
(1, 0, 0, … , 0). Then 𝐾 is collapsible.
Exercise 2.95. Prove Proposition 2.94.
The converse of Proposition 2.94 is false. In fact, it is worth noting
that we should not always expect to find an optimal discrete Morse func-
tion this way, even after many, many runs. The next proposition spells
this out explicitly, while in particular providing an example to show that
the converse of Proposition 2.94 is false.
Proposition 2.96. For every 𝜖 > 0, there exists a simplicial complex
𝐺𝜖 such that the probability that the B-L algorithm yields an optimal
discrete Morse vector of 𝐺𝜖 is less than 𝜖.
Proof. Let 𝜖 > 0 be given and choose 𝑛 ∈ ℕ such that 6/(𝑛 + 6) < 𝜖. Consider
the simplicial complex given by
[Figure: two empty triangles joined by a path with at least 𝑛 edges.]
with at least 𝑛 edges between the two cycles. Now a necessary condition
for a random discrete Morse vector to be optimal is that the first removal
of an edge needs to be one of the six edges in the two triangles. Since the
complex has at least 𝑛 + 6 edges, the probability of this happening is at
most 6/(𝑛 + 6). Thus the probability of obtaining an optimal discrete Morse
vector is at most 6/(𝑛 + 6) < 𝜖. □
Chapter 3
Simplicial homology
This chapter serves as a friendly and working introduction to simplicial
homology, a theory that has been well established for many decades. The
reader familiar with simplicial homology may safely skip it.
Homology is not just an extremely interesting and important tool in
topology—it has a beautiful connection with discrete Morse theory as
well (e.g. Sections 4.1 and 8.4). The idea of homology is to construct a
rigorous theory that both counts the number and identifies the type of
holes in a space. For example, a circle has one hole and a sphere has
no holes. However, a sphere does seem to have a “higher-dimensional”
type of hole, since the entire sphere encloses some 3-dimensional space
or a void (think about the air inside of a basketball). A torus (the outside
skin of a doughnut) seems to have both kinds of holes. There are many,
many versions of homology, computed in different ways and for use on
different kinds of objects. To give a name to what we study, for any inte-
ger 𝑛 ≥ 0, we will study a special kind of “function”¹, called simplicial
(unreduced²) homology and denoted by 𝐻𝑛 (𝐾), from the collection
of finite abstract simplicial complexes to the collection of vector spaces.
We will study another way to compute homology and compare it with
¹i.e., functor
²There is a fairly technical distinction between “reduced” and “unreduced” homology. We won’t
be concerned with it, other than to note that we are using the “unreduced” kind.
simplicial homology in Section 8.2. What makes homology a kind of “su-
per function” is that it is not a function that simply takes in a number
and outputs another number. Rather, this function takes in a simplicial
complex and gives out infinitely many vector spaces. To illustrate, let 𝐾
be the simplicial complex from Example 2.88 (this will be our running
example in Section 3.2). Then under homology, 𝐾 would be associated
to the following sequence of vector spaces:
𝐾 ⟼ [… , 𝕜^0_4 , 𝕜^0_3 , 𝕜^0_2 , 𝕜^3_1 , 𝕜^1_0 ].
We can interpret the vector space 𝕜^𝑖_𝑗 as saying that 𝐾 has 𝑖 holes in di-
mension 𝑗. So the way to interpret this sequence of vector spaces is to
say that 𝐾 has 0 holes in dimension 3 (and 0 holes in every dimension
greater than 3) while it has 3 holes in dimension 1 and 1 hole in dimen-
sion 0. In Section 3.2 we’ll see how to actually derive this sequence of
vector spaces. First we will review some of the needed linear algebra.
3.1. Linear algebra
In this section, we develop a working knowledge of tools from linear
algebra that allow us to define homology. It is helpful but not necessary
to have taken a course in linear algebra. The needed understanding will
be gained through practice with computation. We are mostly interested
in vector spaces and the rank-nullity theorem. Consider the simplicial
complex given below.
[Figure: the boundary of a square, with vertices 𝑎, 𝑏, 𝑐, 𝑑 and edges 𝑎𝑏, 𝑏𝑐, 𝑐𝑑, 𝑎𝑑.]
Now whatever our theory of holes is, it had better detect exactly one
hole in the simplicial complex above. What exactly is the hole? It seems
like it ought to be detected by the sequence of simplices 𝑎𝑏, 𝑏𝑐, 𝑐𝑑, 𝑎𝑑.
Now if a hole is determined by a sequence of simplices like the ones just
given, we will need to somehow consider all the possible combinations
of simplices of a fixed dimension. So, for example, we will get sequences
like
• 𝑏𝑐, 𝑑𝑐, 𝑎𝑑
• 𝑏𝑐, 𝑏𝑐, 𝑏𝑐, 𝑏𝑐
• 𝑑𝑐, 𝑏𝑐, 𝑎𝑏, 𝑎𝑑
This ends up giving us too many combinations. Furthermore, some
do not even describe holes (e.g. 𝑏𝑐, 𝑑𝑐, 𝑎𝑑), and some are repeats—the
sequence 𝑐𝑑, 𝑏𝑐, 𝑎𝑏, 𝑎𝑑 is really the same thing as 𝑎𝑏, 𝑏𝑐, 𝑐𝑑, 𝑎𝑑. A formal
system that addresses these problems is a vector space [107, p. 190]. We
won’t give the technical definition here so much as a working definition.
Rather than think of something like 𝑎𝑏, 𝑏𝑐, 𝑐𝑑, 𝑎𝑑 as a sequence, we’ll
think of it as a sum: “𝑎𝑏 + 𝑏𝑐 + 𝑐𝑑 + 𝑎𝑑.” Given the context, there should
be no confusion between 𝑎𝑏 the 1-simplex and 𝑎𝑏 the element of a vector
space. If we make this addition commutative and associative, we then
see that 𝑐𝑑 + 𝑏𝑐 + 𝑎𝑏 + 𝑎𝑑 = 𝑎𝑏 + 𝑏𝑐 + 𝑐𝑑 + 𝑎𝑑. Furthermore, we will
use modulo or mod 2 arithmetic. This means that there are only two
numbers, namely, 0 and 1. They obey the rules 0+1 = 1 and 1+1 = 0. To
be technically correct³, we would write this as 1+1 ≡ 0 mod 2 and 1+0 ≡
1 mod 2. This system is called the integers modulo 2, denoted by 𝔽2 .
In other words, 𝔽2 = {0, 1} along with the rule that 1 + 1 ≡ 0 mod 2. The
blackboard 𝔽 stands for “field,” an algebraic structure one learns about in
a course in abstract algebra. Furthermore, we have 0⃗ as the “zero vector.”
This is to be distinguished from the number 0. Using this notation, we
see that, for example, 𝑏𝑐 + 𝑏𝑐 + 𝑏𝑐 + 𝑏𝑐 = 4𝑏𝑐 = 0 ⋅ 𝑏𝑐 = 0⃗.
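In code, a mod 2 chain can be modeled as the set of simplices appearing with coefficient 1; addition is then symmetric difference, which makes cancellations like 𝑏𝑐 + 𝑏𝑐 = 0⃗ automatic. A hypothetical sketch (our own, not from the text):

```python
# A chain over F2 = the set of simplices with coefficient 1;
# adding two chains is symmetric difference (repeated terms cancel mod 2).
def add(u, v):
    return u ^ v

x = frozenset({"ab", "bc"})
y = frozenset({"bc", "cd"})
print(sorted(add(x, y)))         # ['ab', 'cd']: the two copies of bc cancel
print(add(x, x) == frozenset())  # True: every chain is its own inverse
```

The empty frozenset plays the role of the zero vector 0⃗.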
Given all the 1-simplices, we have created the vector space
𝕜4 = {0⃗, 𝑎𝑏, 𝑏𝑐, 𝑐𝑑, 𝑎𝑑, 𝑎𝑏 + 𝑏𝑐, 𝑎𝑏 + 𝑐𝑑, 𝑎𝑏 + 𝑎𝑑, 𝑏𝑐 + 𝑐𝑑, 𝑏𝑐 + 𝑎𝑑,
𝑐𝑑 + 𝑎𝑑, 𝑎𝑏 + 𝑏𝑐 + 𝑐𝑑, 𝑎𝑏 + 𝑏𝑐 + 𝑎𝑑, 𝑎𝑏 + 𝑐𝑑 + 𝑎𝑑, 𝑏𝑐 + 𝑐𝑑 + 𝑎𝑑,
𝑎𝑏 + 𝑏𝑐 + 𝑐𝑑 + 𝑎𝑑}.
³The best kind of correct.
We call it 𝕜4 because we started with 4 elements {𝑎𝑏, 𝑏𝑐, 𝑐𝑑, 𝑎𝑑} and
generated a vector space from them—that is, all possible “sums” of those
original 4 elements. The number 4 in this example is the dimension of
the vector space 𝕜4 .
Example 3.1. Consider the two symbols 𝑎 and 𝑏. We can define a 2-
dimensional vector space generated by 𝑎 and 𝑏; specifically, if 𝑐1 , 𝑐2 ∈
{0, 1} = 𝔽2 , the vector space would consist of all elements of the form
𝑐1 𝑎 + 𝑐2 𝑏. We denote this vector space by 𝕜2 , meaning a vector space
generated by 2 objects. For example, 0𝑎 + 1𝑏 ∈ 𝕜2 , which simplifies to
𝑏. Again, as above, we can write down all the elements of 𝕜2 ; that is,
𝕜2 = {0⃗, 𝑎, 𝑏, 𝑎 + 𝑏}. Notice that this is “closed” under addition in the
sense that the sum of any two elements in 𝕜2 is still in 𝕜2 . For example,
𝑎 + 𝑎 = (1 + 1)𝑎 = 0𝑎 = 0⃗.
The elements 𝑎 and 𝑏 are called basis elements of 𝕜2 . By conven-
tion, we say that 𝕜0 = {0⃗}, the trivial vector space consisting of only
one element, namely, the zero vector.
Definition 3.2. Let 𝑋 = {𝑒1 , 𝑒2 , … , 𝑒𝑛 } be a set of 𝑛 distinct elements.
The vector space (over 𝔽2 ) generated by 𝑋, denoted by 𝕜𝑛 , is given by
𝕜𝑛 ∶= {𝑐1 𝑒1 + 𝑐2 𝑒2 + ⋯ + 𝑐𝑛 𝑒𝑛 ∶ 𝑐𝑖 ∈ {0, 1}}. The elements 𝑒1 , … , 𝑒𝑛 ∈
𝕜𝑛 are called basis elements and 𝑛 is called the dimension of 𝕜𝑛 . It
follows by definition that any particular element 𝑥 ∈ 𝕜𝑛 can be written
as a linear combination of basis elements; that is, 𝑥 = 𝑐1 𝑒1 + 𝑐2 𝑒2 +
⋯ + 𝑐𝑛 𝑒𝑛 with 𝑐𝑖 ∈ {0, 1}.
Problem 3.3. If |𝑋| = 𝑛, how many elements are in the vector space 𝕜𝑛 ?
Prove it.
In addition to understanding vector spaces, we need to understand
functions between them. A function 𝐴 ∶ 𝕜𝑛 → 𝕜𝑚 is an 𝔽2 -linear trans-
formation, or just linear, if for every pair of elements 𝑣, 𝑣′ ∈ 𝕜𝑛 , 𝐴
satisfies 𝐴(𝑣 + 𝑣′ ) = 𝐴(𝑣) + 𝐴(𝑣′ ).⁴
Problem 3.4. Let 𝐴 ∶ 𝕜𝑛 → 𝕜𝑚 be a linear transformation as defined
above. Prove that 𝐴(0⃗) = 0⃗.
⁴In general, there is also the condition that scalars must pull through, but since we are working
over 𝔽2 , this condition is automatic.
Linear transformations can be represented by a matrix. In particu-
lar, we will be interested in the set of all elements 𝑥 such that 𝐴(𝑥) = 0⃗,
called the kernel and denoted by ker(𝐴). That is,
ker(𝐴) ∶= {𝑥 ∈ 𝕜𝑛 ∶ 𝐴(𝑥) = 0⃗}.
This turns out to be a vector space, and we call its dimension the nullity
of 𝐴, denoted by null(𝐴). Now this is a helpful concept because when we
generated 𝕜4 above, we wanted to pick out those elements representing
holes. For 𝕜4 , there was only one hole. This number coincides precisely
with the nullity of a certain linear transformation. In general, the nullity
will count all the potential holes in a given dimension. How is this num-
ber computed? It is found by studying a certain matrix. We will compute
it in practice by using the rank-nullity theorem (Theorem 3.5) from lin-
ear algebra. Recall that the range or image of a linear transformation
𝐴 ∶ 𝕜𝑛 → 𝕜𝑚 is
Im(𝐴) ∶= {𝑦 ∈ 𝕜𝑚 ∶ ∃𝑥 ∈ 𝕜𝑛 such that 𝐴(𝑥) = 𝑦}.
It turns out that Im(𝐴) is also a vector space, and its dimension is called
the rank of 𝐴, denoted by rank(𝐴).
Theorem 3.5 (The rank-nullity theorem). Let 𝐴 ∶ 𝕜𝑛 → 𝕜𝑚 be a lin-
ear transformation so that 𝐴 can be viewed as an 𝑚 × 𝑛 matrix. Then
rank(𝐴) + null(𝐴) = 𝑛.
In order to utilize Theorem 3.5, we need to know how to compute
the rank of a matrix. This is easily done if the matrix is in a special form
called row echelon form. A leading coefficient or pivot of a non-zero
row in a matrix refers to the leftmost non-zero entry. Row echelon
form is characterized by two conditions: The first is that all non-zero
rows are above any row of zeros. Second, the leading coefficient of a
non-zero row is always strictly to the right of the leading coefficient of
the row above it.
To transform a matrix into row echelon form, we may perform any of
the following elementary row operations: we may replace one row by
the sum of itself and another row, or we may interchange two rows. Once
a matrix is in row echelon form, the rank is computed using Theorem 3.6.
Theorem 3.6. Performing elementary row operations on a matrix 𝐴
does not change the rank of 𝐴. If 𝐴 is in row echelon form, then the
rank of 𝐴 is precisely the number of non-zero rows.
A proof of the rank-nullity theorem may be found in [107, p. 233],
while a proof of Theorem 3.6 may be found in [107, p. 231].
Example 3.7. Let
𝐴 = ( 1 0 1 1 )
    ( 1 0 0 1 )
    ( 1 0 0 1 ).
Add the second row to the third to obtain
𝐵 = ( 1 0 1 1 )
    ( 1 0 0 1 )
    ( 0 0 0 0 ).
Now we can’t have two 1s in the first column (as it violates the second
condition), so we add the first row to the second, yielding
𝐶 = ( 1 0 1 1 )
    ( 0 0 1 0 )
    ( 0 0 0 0 ).
By Theorem 3.6, rank(𝐴) = 2. Since 𝐴 is 3 × 4, by Theorem 3.5,
null(𝐴) = 4 − 2 = 2.
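Row reduction over 𝔽2 is simple enough to automate. A Python sketch (our own illustration, using only the row operations described in the text) computes the rank of the matrix 𝐴 from Example 3.7:

```python
def rank_f2(matrix):
    """Rank of a 0/1 matrix over F2 via row reduction."""
    rows = [r[:] for r in matrix]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        # Find a row at or below position `rank` with a 1 in this column.
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]   # interchange rows
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                # Replace a row by the sum of itself and the pivot row, mod 2.
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

A = [[1, 0, 1, 1],
     [1, 0, 0, 1],
     [1, 0, 0, 1]]
print(rank_f2(A))  # 2
```

The nullity then follows from Theorem 3.5 as the number of columns minus the rank.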
Problem 3.8. Let
𝐴 = ( 1 0 0 0 1 0 )
    ( 0 1 0 1 0 0 )
    ( 1 1 1 1 1 0 )
    ( 0 0 0 0 1 1 ).
Find rank(𝐴) and null(𝐴).
3.2. Betti numbers
We continue with the intuition behind a theory of holes or homology.
Consider the simplicial complex
𝑎 𝑏
𝑑 𝑐
Now using the idea at the beginning of Section 3.1, we want to view
a hole as a formal sum of elements in a vector space. But a possible
problem arises. It looks like we would be counting three holes, namely,
𝑎𝑏+𝑏𝑐+𝑐𝑎, 𝑎𝑐+𝑐𝑑 +𝑎𝑑, and 𝑎𝑏+𝑏𝑐+𝑐𝑑 +𝑎𝑑. Actually, one of these is a
combination of the other two, as (𝑎𝑏+𝑏𝑐+𝑐𝑎)+(𝑎𝑐+𝑐𝑑+𝑎𝑑) = 𝑎𝑏+𝑏𝑐+
𝑐𝑑 + 𝑎𝑑. But that still leaves 𝑎𝑏 + 𝑏𝑐 + 𝑐𝑎 as a “fake” hole, since it is filled
in. How do we know it is filled in? Precisely because there is a 2-simplex
𝑎𝑏𝑐 whose boundary is the fake hole; that is, 𝜕(𝑎𝑏𝑐) = {𝑎𝑏, 𝑏𝑐, 𝑎𝑐} (recall
Definition 1.6). Viewing 𝑎𝑏𝑐 as a vector in a vector space (as opposed
to a simplex), we could say that 𝜕(𝑎𝑏𝑐) = 𝑎𝑏 + 𝑏𝑐 + 𝑎𝑐. So it looks like
what we are doing here is counting the number of potential holes and
then subtracting from that the number of holes that get filled in. In this
case, the number of holes is 2 − 1 = 1.
Let 𝐾 be a simplicial complex on [𝑣𝑛 ]. Recall that we used 𝑐𝑖 to de-
note the number of 𝑖-simplices of 𝐾. Now define 𝐾𝑖 to be the set of 𝑖-
simplices of 𝐾. It then follows by definition that |𝐾𝑖 | = 𝑐𝑖 .
Exercise 3.9. What are 𝐾0 and 𝐾𝑗 for 𝑗 > 𝑛? Write down the values |𝐾0 |
and |𝐾𝑗 |.
Let 𝜎 ∈ 𝐾𝑖 . Then each 𝜎 ∈ 𝐾𝑖 is a basis element in the vector space
𝕜𝑐𝑖 generated by all the elements of 𝐾𝑖 .
Example 3.10. Recall the simplicial complex 𝐾 from Example 2.88. We
give the simplices names as follows:
[Figure: 𝐾 from Example 2.88 with its vertices labeled 𝑣0 , … , 𝑣6 .]
Then
𝐾 = {𝑣0 , 𝑣1 , 𝑣2 , 𝑣3 , 𝑣4 , 𝑣5 , 𝑣6 , 𝑣0 𝑣3 , 𝑣0 𝑣4 , 𝑣0 𝑣6 , 𝑣1 𝑣2 , 𝑣2 𝑣3 , 𝑣3 𝑣4 , 𝑣1 𝑣5 ,
𝑣2 𝑣5 , 𝑣2 𝑣6 , 𝑣3 𝑣6 , 𝑣0 𝑣3 𝑣4 }.
We have
𝐾0 = {𝑣0 , 𝑣1 , 𝑣2 , 𝑣3 , 𝑣4 , 𝑣5 , 𝑣6 },
𝐾1 = {𝑣0 𝑣3 , 𝑣0 𝑣4 , 𝑣0 𝑣6 , 𝑣1 𝑣2 , 𝑣2 𝑣3 , 𝑣3 𝑣4 , 𝑣1 𝑣5 , 𝑣2 𝑣5 ,
𝑣2 𝑣6 , 𝑣3 𝑣6 },
𝐾2 = {𝑣0 𝑣3 𝑣4 },
𝐾3 = 𝐾4 = ⋯ = ∅,
and so 𝑐0 = 7, 𝑐1 = 10, and 𝑐2 = 1. These in turn generate vector spaces
𝕜7 , 𝕜10 , and 𝕜1 , respectively.
Exercise 3.11. Show that if 𝐾 is an 𝑛-dimensional simplicial complex,
then the sets 𝐾0 , 𝐾1 , … , 𝐾𝑛 form a partition of 𝐾.
In other words, we partitioned 𝐾 into classes where two elements
of 𝐾 are in the same class if and only if they have the same size. Then
we created a vector space for each of those classes, and the dimension
of the vector space depended on the size of the class. Any element in
a vector space generated by a collection of simplices is called a chain.
Now things get a little more tricky. For every 0 ≤ 𝑖 < ∞, we wish to
construct linear transformations 𝜕𝑖 ∶ 𝕜𝑐𝑖 → 𝕜𝑐𝑖−1 . To each simplex this
linear transformation associates its boundary. Once we define 𝜕𝑖 , this
will give us the following chain complex:
⋯ → 𝕜𝑐𝑖 → 𝕜𝑐𝑖−1 → ⋯ → 𝕜𝑐1 → 𝕜𝑐0 → 0,
where the arrow leaving 𝕜𝑐𝑖 is the boundary map 𝜕𝑖 .
A chain complex is a sequence of vector spaces, along with linear trans-
formations between them, with the property that 𝜕𝑖−1 ∘ 𝜕𝑖 = 0. We often
3.2. Betti numbers 89
denote a chain complex by (𝕜∗ , 𝜕∗ ) for short. Now you know immediately
from your work in Exercise 3.9 that 𝕜𝑐𝑗 = {0⃗} for all 𝑗 ≥ 𝑛. We
will simply write 0 = {0}⃗ when there is no possibility of confusion. Now
we define the boundary operator (again, because the boundary of the
simplex is what is being computed) 𝜕𝑖 ∶ 𝕜𝑐𝑖 → 𝕜𝑐𝑖−1 .
Definition 3.12. Let 𝜎 ∈ 𝐾𝑚 and write 𝜎 = 𝜎𝑖0 𝜎𝑖1 ⋯ 𝜎𝑖𝑚 . For 𝑚 = 0,
define 𝜕0 ∶ 𝕜𝑐0 → 0 by 𝜕0 = 0, the matrix of appropriate size consisting
of all zeros. For 𝑚 ≥ 1, define the boundary operator 𝜕𝑚 ∶ 𝕜𝑐𝑚 → 𝕜𝑐𝑚−1 by
𝜕𝑚 (𝜎) ∶= ∑_{0≤𝑗≤𝑚} (𝜎 − {𝜎𝑖𝑗 }) = ∑_{0≤𝑗≤𝑚} 𝜎𝑖0 𝜎𝑖1 ⋯ 𝜎̂𝑖𝑗 ⋯ 𝜎𝑖𝑚 ,
where 𝜎̂𝑖𝑗 indicates that 𝜎𝑖𝑗 is omitted.
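Since we work over 𝔽2 , a chain is determined by the set of simplices appearing with coefficient 1, so the boundary of Definition 3.12 can be sketched as the set of codimension-1 faces. This is an illustration of the definition, not code from the book.

```python
from itertools import combinations

def boundary(sigma):
    """∂ of an m-simplex over F2: the set of its m + 1 faces, each
    obtained by deleting one vertex (simplices are sorted tuples)."""
    return set(combinations(sigma, len(sigma) - 1))

print(sorted(boundary((0, 3, 4))))  # [(0, 3), (0, 4), (3, 4)]
```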
Remark 3.13. Note the relationship between the boundary defined in
Definition 3.12 and that defined in Definition 1.6. Given a simplex 𝜎,
both definitions take into account the codimension-1 faces of 𝜎. The
difference is that the former definition produces a chain while the latter
definition produces a set.
Example 3.14. We continue with Example 3.10. Since ∅ = 𝐾3 = 𝐾4 =
⋯, we have 0 = 𝕜𝑐3 = 𝕜𝑐4 = ⋯. Hence 𝜕𝑖 = 0 for 𝑖 = 3, 4, …. We then
need to compute only 𝜕2 and 𝜕1 . Now 𝜕2 ∶ 𝕜1 → 𝕜10 and, by the rule
given above, 𝜕2 (𝑣0 𝑣3 𝑣4 ) = ∑_{𝑗=0,3,4} (𝑣0 𝑣3 𝑣4 − 𝑣𝑗 ) = 𝑣3 𝑣4 + 𝑣0 𝑣4 + 𝑣0 𝑣3 . The
matrix corresponding to this is
𝜕2 = (1, 1, 0, 0, 0, 1, 0, 0, 0, 0)^𝑇 ,
where the single column is indexed by 𝑣0 𝑣3 𝑣4 and the rows are indexed, in
order, by 𝑣0 𝑣3 , 𝑣0 𝑣4 , 𝑣0 𝑣6 , 𝑣1 𝑣2 , 𝑣2 𝑣3 , 𝑣3 𝑣4 , 𝑣1 𝑣5 , 𝑣2 𝑣5 , 𝑣2 𝑣6 , 𝑣3 𝑣6 .
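Assembling the matrix of 𝜕𝑚 is mechanical: rows are indexed by (𝑚 − 1)-simplices, columns by 𝑚-simplices, and an entry is 1 exactly when the row simplex is a face of the column simplex. The sketch below (mine; it orders the rows lexicographically rather than in the order displayed above) recovers the single column of 𝜕2 for Example 3.14.

```python
from itertools import combinations

def boundary_matrix(K, m):
    """0/1 matrix of ∂_m over F2; rows: (m-1)-simplices, cols: m-simplices."""
    rows = sorted(s for s in K if len(s) == m)       # (m-1)-simplices
    cols = sorted(s for s in K if len(s) == m + 1)   # m-simplices
    faces = {col: set(combinations(col, m)) for col in cols}
    return [[1 if r in faces[col] else 0 for col in cols] for r in rows]

K = ([(v,) for v in range(7)]
     + [(0, 3), (0, 4), (0, 6), (1, 2), (2, 3), (3, 4), (1, 5),
        (2, 5), (2, 6), (3, 6)]
     + [(0, 3, 4)])
D2 = boundary_matrix(K, 2)
# Rows in lexicographic edge order; the 1s sit at (0,3), (0,4), (3,4).
print([row[0] for row in D2])  # [1, 1, 0, 0, 0, 0, 0, 0, 1, 0]
```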
Next we compute 𝜕1 ∶ 𝕜10 → 𝕜7 :
𝜕1 (𝑣0 𝑣3 ) = 𝑣3 + 𝑣0 ,
𝜕1 (𝑣0 𝑣4 ) = 𝑣4 + 𝑣0 ,
𝜕1 (𝑣0 𝑣6 ) = 𝑣6 + 𝑣0 ,
𝜕1 (𝑣1 𝑣2 ) = 𝑣2 + 𝑣1 ,
𝜕1 (𝑣2 𝑣3 ) = 𝑣3 + 𝑣2 ,
𝜕1 (𝑣3 𝑣4 ) = 𝑣4 + 𝑣3 ,
𝜕1 (𝑣1 𝑣5 ) = 𝑣5 + 𝑣1 ,
𝜕1 (𝑣2 𝑣5 ) = 𝑣5 + 𝑣2 ,
𝜕1 (𝑣2 𝑣6 ) = 𝑣6 + 𝑣2 ,
𝜕1 (𝑣3 𝑣6 ) = 𝑣6 + 𝑣3 .
This yields the matrix 𝜕1 given by
     𝑣0𝑣3 𝑣0𝑣4 𝑣0𝑣6 𝑣1𝑣2 𝑣2𝑣3 𝑣3𝑣4 𝑣1𝑣5 𝑣2𝑣5 𝑣2𝑣6 𝑣3𝑣6
𝑣0     1    1    1    0    0    0    0    0    0    0
𝑣1     0    0    0    1    0    0    1    0    0    0
𝑣2     0    0    0    1    1    0    0    1    1    0
𝑣3     1    0    0    0    1    1    0    0    0    1
𝑣4     0    1    0    0    0    1    0    0    0    0
𝑣5     0    0    0    0    0    0    1    1    0    0
𝑣6     0    0    1    0    0    0    0    0    1    1
Let’s practice using the information given in the matrix to compute
𝜕𝜕(𝑣2 𝑣5 ) and 𝜕𝜕(𝑣0 𝑣3 𝑣4 ). We have
𝜕𝜕(𝑣2 𝑣5 ) = 𝜕(𝑣5 + 𝑣2 )
= 𝜕(𝑣5 ) + 𝜕(𝑣2 )
= 0⃗
and
𝜕𝜕(𝑣0 𝑣3 𝑣4 ) = 𝜕(𝑣3 𝑣4 + 𝑣0 𝑣4 + 𝑣0 𝑣3 )
= 𝜕(𝑣3 𝑣4 ) + 𝜕(𝑣0 𝑣4 ) + 𝜕(𝑣0 𝑣3 )
= 𝑣3 + 𝑣4 + 𝑣0 + 𝑣4 + 𝑣0 + 𝑣3
= 2𝑣0 + 2𝑣4 + 2𝑣3
= 0⃗.
In both cases, we obtain 0⃗.
Proposition 3.15. With 𝜕𝑚 defined as above, 𝜕𝑚−1 𝜕𝑚 = 0 where 0 is
the zero matrix.
Proof. Since there is no confusion, we drop the subscripts on 𝜕. Fur-
thermore, we compute 𝜕𝜕 on a single generator, and the general result
follows by linearity. We have
𝜕𝜕(𝜎) = ∑_{0≤𝑗≤𝑚} 𝜕(𝜎0 𝜎1 ⋯ 𝜎̂𝑗 ⋯ 𝜎𝑚 )
= ∑_{𝑖≠𝑗} 𝜎0 𝜎1 ⋯ 𝜎̂𝑗 ⋯ 𝜎̂𝑖 ⋯ 𝜎𝑚
= 0⃗,
where the last equality is justified since for fixed 𝑖 ≠ 𝑗, the term
𝜎0 𝜎1 ⋯ 𝜎̂𝑗 ⋯ 𝜎̂𝑖 ⋯ 𝜎𝑚 appears twice in the sum (once for each order in
which the two vertices are removed), and the two occurrences added
together give 0⃗. □
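Proposition 3.15 can also be verified numerically for Example 3.14: multiplying the two boundary matrices mod 2 yields the zero matrix. A self-contained sketch (not from the book):

```python
from itertools import combinations

def boundary_matrix(K, m):
    """0/1 matrix of ∂_m over F2; rows: (m-1)-simplices, cols: m-simplices."""
    rows = sorted(s for s in K if len(s) == m)
    cols = sorted(s for s in K if len(s) == m + 1)
    return [[1 if r in set(combinations(c, m)) else 0 for c in cols]
            for r in rows]

K = ([(v,) for v in range(7)]
     + [(0, 3), (0, 4), (0, 6), (1, 2), (2, 3), (3, 4), (1, 5),
        (2, 5), (2, 6), (3, 6)]
     + [(0, 3, 4)])
D1, D2 = boundary_matrix(K, 1), boundary_matrix(K, 2)

# Each vertex of the triangle lies on exactly two of its edges, so every
# entry of the product ∂1∂2 is even, i.e. zero mod 2.
prod = [[sum(D1[i][k] * D2[k][j] for k in range(len(D2))) % 2
         for j in range(len(D2[0]))] for i in range(len(D1))]
print(all(entry == 0 for row in prod for entry in row))  # True
```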
We have thus defined a chain complex from a simplicial complex
𝐾. Now that we are comfortable with this new notation, we can define
precisely what we mean by holes.
Definition 3.16. We define the 𝑖th (unreduced) 𝔽2 -homology of 𝐾 to
be the vector space
𝐻𝑖 (𝐾; 𝔽2 ) ∶= 𝕜^{null 𝜕𝑖 − rank 𝜕𝑖+1 } .
The 𝑖th 𝔽2 -Betti number of 𝐾 is defined to be 𝑏𝑖 (𝐾; 𝔽2 ) = null 𝜕𝑖 −
rank 𝜕𝑖+1 . We usually call it the Betti number for short and use the nota-
tion 𝐻𝑖 (𝐾) and 𝑏𝑖 (𝐾) or even 𝐻𝑖 and 𝑏𝑖 when 𝐾 is clear from the context.
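Definition 3.16 turns Betti numbers into two rank computations over 𝔽2 . As an illustration (my own code, using Gaussian elimination mod 2), the hollow triangle, a triangulation of 𝑆1 , has 𝑏0 = 𝑏1 = 1:

```python
def rank_mod2(M):
    """Rank of a 0/1 matrix over F2 via Gaussian elimination."""
    M = [row[:] for row in M]
    rank = 0
    for col in range(len(M[0]) if M else 0):
        pivot = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

# Hollow triangle: 3 vertices a, b, c and 3 edges, no 2-simplices.
# ∂1 has columns ab, bc, ac over rows a, b, c; ∂2 is the zero map.
D1 = [[1, 0, 1],
      [1, 1, 0],
      [0, 1, 1]]
c1, c0 = 3, 3
b1 = (c1 - rank_mod2(D1)) - 0   # null(∂1) − rank(∂2) = (3 − 2) − 0
b0 = c0 - rank_mod2(D1)         # null(∂0) − rank(∂1) = 3 − 2
print(b0, b1)  # 1 1
```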
Remark 3.17. For now we will mostly use the Betti numbers without
recourse to the homology vector spaces 𝐻𝑖 (𝐾). However, in Section 8.2,
we will revisit the homology vector spaces themselves and see that they
can be viewed from a different perspective.
Remark 3.18. Another technical point: 𝔽2 -Betti numbers do not always
coincide with “Betti numbers” that you might encounter in other books
on topology. We choose to work with 𝔽2 -Betti numbers in order to make
computations easier. The tradeoff is that we may lose some informa-
tion. In particular, our 𝔽2 -Betti numbers will differ somewhat from the
“standard” Betti numbers of the Klein bottle in Example 8.34.
Example 3.19. Now using the techniques from Section 3.1, we see that
rank(𝜕2 ) = 1, null(𝜕2 ) = 0, rank(𝜕1 ) = 6, null(𝜕1 ) = 4, rank(𝜕0 ) = 0, and
null(𝜕0 ) = 7. Hence we obtain 𝐻2 (𝐾) = 𝕜0 , 𝐻1 (𝐾) = 𝕜4−1 = 𝕜3 , and
𝐻0 (𝐾) = 𝕜7−6 = 𝕜1 . Thus 𝑏0 (𝐾) = 1, 𝑏1 (𝐾) = 3, and 𝑏𝑖 (𝐾) = 0 for
all 𝑖 ≥ 2. Looking at the picture of 𝐾 in Example 3.10, we can identify
the three holes corresponding to 𝑏1 (𝐾) = 3 and furthermore see that
𝑏𝑖 (𝐾) = 0 for 𝑖 > 1. But what does 𝑏0 (𝐾) = 1 mean? The value 𝑏0
counts the connected components, or whole “pieces,” of 𝐾. Since 𝐾
is connected, it makes sense that 𝑏0 (𝐾) = 1.
The definitions here, especially of the boundary operators, are quite
technical and tricky, but practicing with several examples should help to
clarify.
Example 3.20. Let 𝐾 = {𝑣1 , 𝑣2 , 𝑣3 , 𝑣4 , 𝑣5 , 𝑣6 , 𝑣7 , 𝑣12 , 𝑣13 , 𝑣24 , 𝑣34 , 𝑣25 , 𝑣45 ,
𝑣56 , 𝑣245 }. Then 𝐾 is given by
[Figure: the simplicial complex 𝐾, drawn with the triangle 𝑣245 filled in and the vertex 𝑣7 isolated.]
We follow the same method as above. Before we begin, look at the pic-
ture of 𝐾 and see if you can predict what the Betti numbers will be. First
we list each 𝐾𝑖 :
𝐾2 = {𝑣245 },
𝐾1 = {𝑣12 , 𝑣13 , 𝑣24 , 𝑣34 , 𝑣25 , 𝑣45 , 𝑣56 },
𝐾0 = {𝑣1 , 𝑣2 , 𝑣3 , 𝑣4 , 𝑣5 , 𝑣6 , 𝑣7 }.
This induces the chain complex
0 →(𝜕3 = 0) 𝕜1 →(𝜕2 ) 𝕜7 →(𝜕1 ) 𝕜7 →(𝜕0 = 0) 0.
We need to compute the maps 𝜕2 and 𝜕1 . Now 𝜕2 ∶ 𝕜1 → 𝕜7 , so it is
realized by a 7 × 1 matrix. By definition of 𝜕2 , we see that
𝜕2 (𝑣245 ) = 𝑣45 + 𝑣25 + 𝑣24 . Thus
𝜕2 = (0, 0, 1, 0, 1, 1, 0)^𝑇 ,
with rows indexed, in order, by 𝑣12 , 𝑣13 , 𝑣24 , 𝑣34 , 𝑣25 , 𝑣45 , 𝑣56 .
Now 𝜕1 ∶ 𝕜7 → 𝕜7 and hence is a 7 × 7 matrix. We have
𝜕1 (𝑣12 ) = 𝑣2 + 𝑣1 ,
𝜕1 (𝑣13 ) = 𝑣3 + 𝑣1 ,
𝜕1 (𝑣24 ) = 𝑣4 + 𝑣2 ,
𝜕1 (𝑣34 ) = 𝑣4 + 𝑣3 ,
𝜕1 (𝑣25 ) = 𝑣5 + 𝑣2 ,
𝜕1 (𝑣45 ) = 𝑣5 + 𝑣4 ,
𝜕1 (𝑣56 ) = 𝑣6 + 𝑣5 ,
and hence
     𝑣12 𝑣13 𝑣24 𝑣34 𝑣25 𝑣45 𝑣56
𝑣1    1   1   0   0   0   0   0
𝑣2    1   0   1   0   1   0   0
𝑣3    0   1   0   1   0   0   0
𝜕1 = 𝑣4    0   0   1   1   0   1   0
𝑣5    0   0   0   0   1   1   1
𝑣6    0   0   0   0   0   0   1
𝑣7    0   0   0   0   0   0   0
Now 𝜕2 is a non-zero column vector, so that rank(𝜕2 ) = 1. By Theorem
3.5, null(𝜕2 ) = 1 − 1 = 0. Furthermore, 𝜕1 row-reduces to 5 non-zero
rows, so by Theorem 3.6, rank(𝜕1 ) = 5. Again using Theorem 3.5, we see
that null(𝜕1 ) = 7−5 = 2. Finally, null(𝜕0 ) = 7 and rank(𝜕0 ) = 0. Putting
these pieces together and using the definition of homology above, we see
that
𝐻2 (𝐾) = 𝕜0 = 0,
𝐻1 (𝐾) = 𝕜2−1 = 𝕜1 ,
𝐻0 (𝐾) = 𝕜7−5 = 𝕜2 .
The Betti numbers of 𝐾 are thus given by 𝑏2 (𝐾) = 0, 𝑏1 (𝐾) = 1, and
𝑏0 (𝐾) = 2. As we mentioned above, 𝑏0 counts the number of compo-
nents, and since the vertex 𝑣7 is not connected to anything else, 𝑏0 (𝐾) =
2 is appropriate.
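The arithmetic of Example 3.20 can be double-checked by machine: feeding the two displayed matrices to an 𝔽2 rank routine recovers rank(𝜕1 ) = 5, rank(𝜕2 ) = 1, and the Betti numbers (2, 1, 0). A sketch, not from the book:

```python
def rank_mod2(M):
    """Rank of a 0/1 matrix over F2 via Gaussian elimination."""
    M = [row[:] for row in M]
    rank = 0
    for col in range(len(M[0]) if M else 0):
        pivot = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

# The matrices of Example 3.20, typed in directly.
D2 = [[0], [0], [1], [0], [1], [1], [0]]
D1 = [[1, 1, 0, 0, 0, 0, 0],
      [1, 0, 1, 0, 1, 0, 0],
      [0, 1, 0, 1, 0, 0, 0],
      [0, 0, 1, 1, 0, 1, 0],
      [0, 0, 0, 0, 1, 1, 1],
      [0, 0, 0, 0, 0, 0, 1],
      [0, 0, 0, 0, 0, 0, 0]]
b2 = (1 - rank_mod2(D2)) - 0               # null(∂2) − rank(∂3)
b1 = (7 - rank_mod2(D1)) - rank_mod2(D2)   # null(∂1) − rank(∂2)
b0 = 7 - rank_mod2(D1)                     # null(∂0) − rank(∂1)
print(b0, b1, b2)  # 2 1 0
```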
Problem 3.21. Let 𝐾 = 𝑆 2 . Compute the Betti numbers of 𝐾.
Problem 3.22. Let 𝜕𝑖 ∶ 𝑉𝑖 → 𝑉𝑖−1 , 𝑖 = 1, 2, … , be a collection of vector
spaces and linear transformations. Prove that Im(𝜕𝑖+1 ) ⊆ ker(𝜕𝑖 ) if and
only if 𝜕𝑖 ∘ 𝜕𝑖+1 = 0.
There is a very nice relationship between the Betti numbers and the
Euler characteristic.
Theorem 3.23. Let 𝐾 be a simplicial complex of dimension 𝑛. Then
𝜒(𝐾) = ∑_{𝑖=0}^{𝑛} (−1)^𝑖 𝑏𝑖 .
Proof. Observe that
𝜒(𝐾) = ∑_{𝑖=0}^{𝑛} (−1)^𝑖 𝑐𝑖
= ∑_{𝑖=0}^{𝑛} (−1)^𝑖 [rank(𝜕𝑖 ) + null(𝜕𝑖 )]
= rank(𝜕0 ) + null(𝜕0 ) − rank(𝜕1 ) − null(𝜕1 ) + ⋯
+ (−1)^𝑛 rank(𝜕𝑛 ) + (−1)^𝑛 null(𝜕𝑛 )
= 0 + 𝑏0 − 𝑏1 + ⋯ + (−1)^{𝑛−1} 𝑏𝑛−1 + (−1)^𝑛 𝑏𝑛
= ∑_{𝑖=0}^{𝑛} (−1)^𝑖 𝑏𝑖 ,
where the second equality is justified by the rank-nullity theorem. □
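For instance, on Example 3.20 we had 𝑐 = (7, 7, 1) and 𝑏 = (2, 1, 0), and both alternating sums in Theorem 3.23 give 𝜒(𝐾) = 1:

```python
# Sanity check of Theorem 3.23 on the data of Example 3.20.
c, b = [7, 7, 1], [2, 1, 0]
chi_from_c = sum((-1) ** i * ci for i, ci in enumerate(c))  # 7 − 7 + 1
chi_from_b = sum((-1) ** i * bi for i, bi in enumerate(b))  # 2 − 1 + 0
print(chi_from_c, chi_from_b)  # 1 1
```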
Problem 3.24. For all integers 𝑖 ≥ 0, compute 𝑏𝑖 (𝐷), where 𝐷 is the
dunce cap in Example 1.22.
3.3. Invariance under collapses
We saw in Proposition 1.42 that if two simplicial complexes have the
same simple homotopy type, then they have the same Euler character-
istic. The Betti numbers of a simplicial complex likewise do not change
under expansions and collapses. The main result of this section is the
following:
Proposition 3.25. Let 𝐾 be an 𝑛-dimensional simplicial complex. Sup-
pose 𝐾 ↘ 𝐾 ′ is an elementary collapse (so that 𝐾 ′ ↗ 𝐾). Then 𝑏𝑑 (𝐾) =
𝑏𝑑 (𝐾 ′ ) for all 𝑑 = 0, 1, 2, … .
Before proving Proposition 3.25, we need another concept from lin-
ear algebra as well as a lemma.
Definition 3.26. Let 𝑈 and 𝑈 ′ be subspaces of a vector space 𝑉 . The
sum of vector spaces is defined by 𝑈 + 𝑈 ′ ∶= {𝑢 + 𝑢′ ∶ 𝑢 ∈ 𝑈, 𝑢′ ∈ 𝑈 ′ }.
Furthermore, 𝑉 is said to be the direct sum of 𝑈 and 𝑈 ′ , written 𝑉 =
𝑈 ⊕ 𝑈 ′ , if 𝑉 = 𝑈 + 𝑈 ′ and 𝑈 ∩ 𝑈 ′ = {0⃗}. If 𝑇 ∶ 𝑈 → 𝑉 and 𝑇 ′ ∶ 𝑈 ′ → 𝑉 ′ ,
define 𝑇 ⊕ 𝑇 ′ ∶ 𝑈 ⊕ 𝑈 ′ → 𝑉 ⊕ 𝑉 ′ by (𝑇 ⊕ 𝑇 ′ )(𝑢 + 𝑢′ ) ∶= 𝑇(𝑢) + 𝑇 ′ (𝑢′ ).
The following is another purely linear algebra fact. We omit the
proof.
Proposition 3.27. Let 𝑇 ∶ 𝑈 → 𝑉 and 𝑇 ′ ∶ 𝑈 ′ → 𝑉 ′ . Then null(𝑇 ⊕
𝑇 ′ ) = null(𝑇) + null(𝑇 ′ ) and rank(𝑇 ⊕ 𝑇 ′ ) = rank(𝑇) + rank(𝑇 ′ ).
Definition 3.28. Given a chain complex (𝕜∗ , 𝜕∗ ), another chain complex
(𝕜′∗ , 𝜕∗ ) is a subchain complex if for every 𝑛 ≥ 1, 𝕜′𝑛 is a vector subspace
of 𝕜𝑛 and 𝜕𝑛 (𝕜′𝑛 ) ⊆ 𝕜′𝑛−1 .
A chain complex (𝕜∗ , 𝜕∗ ) is said to split into chain complexes (𝕜′∗ , 𝜕∗ )
and (𝕜″∗ , 𝜕∗ ) if 𝕜𝑖 = 𝕜′𝑖 ⊕ 𝕜″𝑖 for all 𝑖.
Lemma 3.29. Suppose that the chain complex (𝕜∗ , 𝜕∗ ) splits as (𝕜′∗ , 𝜕∗ )
and (𝕜″∗ , 𝜕∗ ). Then 𝑏𝑖 (𝕜∗ ) = 𝑏𝑖 (𝕜′∗ ) + 𝑏𝑖 (𝕜″∗ ).
Problem 3.30. Prove Lemma 3.29.
We are now ready to prove Proposition 3.25.
Proof of Proposition 3.25. Let 𝐾 be a simplicial complex and suppose
{𝜏(𝑑−1) , 𝜎(𝑑) } is a free pair of 𝐾. Write 𝐾 ↘ 𝐾 ′ ∶= 𝐾 − {𝜏, 𝜎}. Denote
by (𝕜∗ , 𝜕∗ ) and (𝕜′∗ , 𝜕∗ ) the chain complexes for 𝐾 and 𝐾 ′ , respectively.
Define a new chain complex (𝕜″∗ , 𝜕∗″ ) where 𝕜″𝑑 is the vector space gen-
erated by 𝜎, 𝕜″𝑑−1 is the vector space generated by 𝜕𝑑 (𝜎), and all other
vector spaces are the 0 vector space. Furthermore, 𝜕𝑑″ (𝜎) ∶= 𝜕𝑑 (𝜎),
and all other boundary operators are clearly 0. We claim that (𝕜∗ , 𝜕∗ ) =
(𝕜′∗ ⊕𝕜″∗ , 𝜕∗′ ⊕𝜕∗″ ). If so, the result follows by Lemma 3.29 since 𝑏𝑖 (𝕜″∗ ) = 0
for all 𝑖. Now clearly 𝕜𝑖 = 𝕜′𝑖 ⊕ 𝕜″𝑖 for all 𝑖 ≠ 𝑑 − 1. It remains to
show that 𝕜𝑑−1 = 𝕜′𝑑−1 ⊕ 𝕜″𝑑−1 . Let 𝛼 ∈ 𝕜𝑑−1 be a basis element. If
𝛼 ∈ 𝕜′𝑑−1 , we are done. Otherwise, 𝛼 = 𝜏. Write 𝜏 = 𝜎0 ⋯ 𝜎𝑗̂ ⋯ 𝜎𝑑 .
Observe that 𝜏 = ( ∑𝑖≠𝑗 𝜎0 ⋯ 𝜎𝑗̂ ⋯ 𝜎𝑖̂ ⋯ 𝜎𝑑 ) + 𝜕𝑑 (𝜎) ∈ 𝕜′𝑑−1 + ⟨𝜕𝑑 (𝜎)⟩
where ⟨𝜕𝑑 (𝜎)⟩ is the vector space generated by the set 𝜕𝑑 (𝜎). Thus 𝕜𝑑−1 ⊆
𝕜′𝑑−1 + ⟨𝜕𝑑 (𝜎)⟩. Clearly we have inclusion the other way. Finally, since
𝜏 ∉ 𝕜′𝑑−1 , 𝕜′𝑑−1 ∩ ⟨𝜕𝑑 (𝜎)⟩ = {0}, and the result follows. □
Corollary 3.31. Let 𝐾 ∼ 𝐿. Then 𝑏𝑖 (𝐾) = 𝑏𝑖 (𝐿) for every integer 𝑖 ≥ 0.
Problem 3.32. Prove Corollary 3.31, and then prove that if 𝐾 is collapsi-
ble, then 𝑏𝑖 (𝐾) = 0 for all 𝑖 ≥ 1 and 𝑏0 (𝐾) = 1.
Problem 3.33. Compute the Betti numbers of the following simplicial
complex. [Hint: There is an easy way and a cumbersome way to do this.]
Problem 3.34. Complete the work you could not do in Problem 1.46.
That is, show that 𝑆 1 ≁ 𝑆 3 .
Remark 3.35. As can be seen from your work in Problem 3.34, Corollary
3.31 is an excellent “in-theory” way to distinguish between simplicial
complexes with different simple homotopy types. However, in practice
it can be quite cumbersome, even impossible, to perform the computa-
tions by hand. The more simplices a complex has, the more difficult the
computation of its Betti numbers will be. We will find a way to make
computations of Betti numbers easier for certain cases in Section 8.4,
and then we will carry out these computations in Section 8.5. We will
also give an algorithm to perform these computations in Section 9.2.
Before completing this chapter, we prove another result about Betti
numbers. The addition of a 𝑝-simplex will either increase 𝑏𝑝 by 1 or
decrease 𝑏𝑝−1 by 1 (but not both). All other Betti numbers will be unaf-
fected. This lemma will be utilized in Chapters 4 and 5.
Lemma 3.36. Let 𝐾 be a simplicial complex and 𝜎(𝑝) ∈ 𝐾 a 𝑝-dimen-
sional facet of 𝐾, where 𝑝 ≥ 1. If 𝐾 ′ ∶= 𝐾 − {𝜎} is a simplicial complex,
then one of the following holds:
(a) 𝑏𝑝 (𝐾) = 𝑏𝑝 (𝐾 ′ ) + 1 and 𝑏𝑝−1 (𝐾) = 𝑏𝑝−1 (𝐾 ′ );
(b) 𝑏𝑝−1 (𝐾) + 1 = 𝑏𝑝−1 (𝐾 ′ ) and 𝑏𝑝 (𝐾) = 𝑏𝑝 (𝐾 ′ ).
Furthermore, 𝑏𝑖 (𝐾) = 𝑏𝑖 (𝐾 ′ ) for all 𝑖 ≠ 𝑝, 𝑝 − 1.
Proof. Let (𝕜∗ , 𝜕∗ ) and (𝕜′∗ , 𝜕∗′ ) be the associated chain complexes for 𝐾
and 𝐾 ′ , respectively. Since 𝜎 is a facet, it follows that 𝜕𝑖 = 𝜕𝑖′ for all
𝑖 ≠ 𝑝. Hence 𝑏𝑖 (𝐾) = 𝑏𝑖 (𝐾 ′ ) for all 𝑖 other than possibly 𝑏𝑝 = null(𝜕𝑝 ) −
rank(𝜕𝑝+1 ) and 𝑏𝑝−1 = null(𝜕𝑝−1 ) − rank(𝜕𝑝 ). We consider the cases
where 𝜎 ∈ ker(𝜕𝑝 ) and 𝜎 ∉ ker(𝜕𝑝 ).
Suppose that 𝜎 ∈ ker(𝜕𝑝 ). Then Im(𝜕𝑝 ) = Im(𝜕𝑝′ ) and hence
rank(𝜕𝑝 ) = rank(𝜕𝑝′ ). Since 𝜎 ∉ 𝕜′∗ , clearly 𝜎 ∉ ker(𝜕𝑝′ ) so that null(𝜕𝑝 ) =
1 + null(𝜕𝑝′ ); hence
𝑏𝑝 (𝐾) = null(𝜕𝑝 ) − rank(𝜕𝑝+1 )
= 1 + null(𝜕𝑝′ ) − rank(𝜕𝑝+1 )
= 1 + 𝑏𝑝 (𝐾 ′ )
and 𝑏𝑝−1 (𝐾) = null(𝜕𝑝−1 ) − rank(𝜕𝑝 ) = null(𝜕′𝑝−1 ) − rank(𝜕𝑝′ ) = 𝑏𝑝−1 (𝐾 ′ ).
Now suppose that 𝜎 ∉ ker(𝜕𝑝 ), so that 0 ≠ 𝜕𝑝 (𝜎) ∈ Im(𝜕𝑝 ). Then
ker(𝜕𝑝 ) = ker(𝜕𝑝′ ) so that 𝑏𝑝 (𝐾) = null(𝜕𝑝 ) − rank(𝜕𝑝 ) = null(𝜕𝑝′ ) −
rank(𝜕𝑝′ ) = 𝑏𝑝 (𝐾 ′ ). Furthermore, since 𝜕𝑝 (𝜎) is a nontrivial element
of Im(𝜕𝑝 ) and 𝜎 is a basis element, rank(𝜕𝑝 ) = rank(𝜕𝑝′ ) + 1. Hence
𝑏𝑝−1 (𝐾) = null(𝜕𝑝−1 ) − rank(𝜕𝑝 ) = null(𝜕′𝑝−1 ) − rank(𝜕𝑝′ ) − 1 = 𝑏𝑝−1 (𝐾 ′ ) − 1. □
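Lemma 3.36 can be watched in action on a small example. In the sketch below (my own code), 𝐾 is the boundary of the 3-simplex, a triangulation of 𝑆 2 , and 𝜎 is one of its 2-dimensional facets; removing 𝜎 realizes case (a) with 𝑝 = 2: 𝑏2 drops by 1 while 𝑏1 and 𝑏0 are unchanged.

```python
from itertools import combinations

def rank_mod2(M):
    """Rank of a 0/1 matrix over F2 via Gaussian elimination."""
    M = [row[:] for row in M]
    rank = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def bd_matrix(K, m):
    rows = sorted(s for s in K if len(s) == m)
    cols = sorted(s for s in K if len(s) == m + 1)
    return [[1 if r in set(combinations(c, m)) else 0 for c in cols]
            for r in rows]

def betti(K, i):
    """b_i = null(∂_i) − rank(∂_{i+1}) over F2."""
    c_i = sum(1 for s in K if len(s) == i + 1)
    rank_di = rank_mod2(bd_matrix(K, i)) if i > 0 else 0
    return (c_i - rank_di) - rank_mod2(bd_matrix(K, i + 1))

# K = boundary of the 3-simplex on vertices 0..3 (all faces except 0123).
K = [s for m in (1, 2, 3) for s in combinations(range(4), m)]
sigma = (1, 2, 3)                         # a 2-dimensional facet
K2 = [s for s in K if s != sigma]
print([betti(K, i) for i in (0, 1, 2)])   # [1, 0, 1]
print([betti(K2, i) for i in (0, 1, 2)])  # [1, 0, 0]
```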
Problem 3.37. Using the above lemma and your knowledge of 𝑏𝑖 (Δ𝑛+1 ),
compute 𝑏𝑖 (𝑆𝑛 ) for every 𝑖. Conclude that if 𝑛 ≠ 𝑚, then 𝑆 𝑛 and 𝑆 𝑚 do
not have the same simple homotopy type.
Problem 3.38. Let 𝑛 ≥ 0 be an integer and let 𝜎(𝑛) ∈ 𝑆 𝑛 be any simplex
of dimension 𝑛. Compute 𝑏𝑖 (𝑆 𝑛 − {𝜎}) for all 𝑖 ≥ 0.
Chapter 4
Main theorems of discrete Morse theory
With much of the technical linear algebra machinery now behind us, we
devote this chapter to two of the most utilized results in discrete Morse
theory. These are the discrete Morse inequalities (Theorems 4.1 and 4.4)
and the collapse theorem (Theorem 4.27). In addition to these two re-
sults, several other topics, such as perfect discrete Morse functions and
level subcomplexes, are discussed.
4.1. Discrete Morse inequalities
As we have hinted, there is a strong relationship between the Betti num-
bers of a simplicial complex and the number of critical simplices of any
discrete Morse function on that same complex. This relationship is ob-
served in the weak discrete Morse inequalities, which we are now able
to prove.
Theorem 4.1 (Weak discrete Morse inequalities). Let 𝑓 ∶ 𝐾 → ℝ be a
discrete Morse function with 𝑚𝑖 critical values in dimension 𝑖 for 𝑖 =
0, 1, 2, … , 𝑛 ∶= dim(𝐾). Then
(i) 𝑏𝑖 ≤ 𝑚𝑖 for all 𝑖 = 0, 1, … , 𝑛 and
𝑛
(ii) ∑ (−1)𝑖 𝑚𝑖 = 𝜒(𝐾).
𝑖=0
Proof. We prove only the first part, as the second part is Problem 4.2.
Because it does not affect the values of 𝑚𝑖 , we may assume without loss
of generality that 𝑓 is excellent by Lemma 2.33. We proceed by (strong)
induction on ℓ, the number of simplices of 𝐾. For ℓ = 1, the only sim-
plicial complex with one simplex is 𝐾 = ∗. By Problem 3.32, 𝑏0 (𝐾) = 1
and 𝑏𝑖 (𝐾) = 0 for all 𝑖 ≥ 1. Furthermore, 𝑚0 ≥ 1 for any discrete Morse
function on 𝐾 by Problem 2.26. Thus the base case is shown.
Assume the inductive hypothesis that there is an ℓ ≥ 1 such that for
every simplicial complex with 1 ≤ 𝑗 ≤ ℓ simplices, any discrete Morse
function satisfies 𝑏𝑖 ≤ 𝑚𝑖 . Suppose 𝐾 is any simplicial complex with ℓ+1
simplices and 𝑓 ∶ 𝐾 → ℝ is an (excellent) discrete Morse function. Now
consider max{𝑓}. If this is a critical value with corresponding (unique)
critical 𝑝-simplex 𝜎, we may consider 𝐾 ′ ∶= 𝐾 − {𝜎} and the function
𝑓′ = 𝑓|𝐾 ′ ∶ 𝐾 ′ → ℝ. Clearly 𝑓′ is a discrete Morse function satisfying
𝑚𝑝 (𝐾 ′ ) + 1 = 𝑚𝑝 (𝐾). Furthermore, by Lemma 3.36, removal of this
critical simplex 𝜎 results in either 𝑏𝑝 (𝐾) = 𝑏𝑝 (𝐾 ′ ) + 1 or 𝑏𝑝−1 (𝐾) + 1 =
𝑏𝑝−1 (𝐾 ′ ), while 𝑏𝑖 (𝐾) = 𝑏𝑖 (𝐾 ′ ) for all other values. Supposing the for-
mer, since 𝐾 ′ has ℓ simplices, it satisfies 𝑏𝑖 (𝐾 ′ ) ≤ 𝑚𝑖 (𝐾 ′ ) by the
inductive hypothesis. We thus have
𝑏𝑝 (𝐾) − 1 = 𝑏𝑝 (𝐾 ′ ) ≤ 𝑚𝑝 (𝐾 ′ ) = 𝑚𝑝 (𝐾) − 1,
which is the desired result. The case where 𝑏𝑝−1 (𝐾) + 1 = 𝑏𝑝−1 (𝐾 ′ ) is
similar.
Otherwise, if 𝜎 is not critical, then 𝜎 is a regular simplex and hence
part of a free pair. Removal of the free pair is an elementary collapse, and
by Corollary 3.31 the resulting complex has the same Betti numbers, so
by the inductive hypothesis we have that 𝑏𝑖 (𝐾) = 𝑏𝑖 (𝐾 ′ ) ≤ 𝑚𝑖 (𝐾 ′ ) =
𝑚𝑖 (𝐾) for all 𝑖. □
Problem 4.2. Prove the second part of Theorem 4.1.
4.1.1. Strong discrete Morse inequalities (optional). As the name
“weak” discrete Morse inequalities implies, there are also the strong dis-
crete Morse inequalities. Unfortunately, in order to prove them, we need
theorems from homotopy theory which use techniques beyond the scope
of this work.
We will state the needed results in detail using all the technical terms
in Theorem 4.3 and then discuss how we will use it. The interested
reader may find a proof of (i) in [65, Corollary 3.5], of (ii) in [137, Corol-
lary 4.24], and of (iii) in [116, pp. 28–30].
Theorem 4.3. Let 𝐾 be an 𝑛-dimensional simplicial complex with 𝑐𝑖
simplices of dimension 𝑖, and let 𝑓 ∶ 𝐾 → ℝ be a discrete Morse function.
Then
(i) 𝐾 is homotopy equivalent to a CW complex 𝑋 where the 𝑝-cells
of 𝑋 are in bijective correspondence with the set of critical 𝑝-
simplices of 𝑓;
(ii) 𝑏𝑖 (𝑋) = 𝑏𝑖 (𝐾) for all 𝑖 = 0, 1, … ;
(iii) for each 𝑝 = 0, 1, 2, … , 𝑛, 𝑛 + 1, … we have
𝑏𝑝 − 𝑏𝑝−1 + 𝑏𝑝−2 − ⋯ + (−1)^𝑝 𝑏0 ≤ 𝑐𝑝 − 𝑐𝑝−1 + 𝑐𝑝−2 − ⋯ + (−1)^𝑝 𝑐0 .
Recall that 𝑐𝑖 is the number of 𝑖-simplices of 𝐾, while 𝑚𝑖 is the num-
ber of critical 𝑖-simplices of 𝑓. Theorem 4.3 says that we may replace 𝐾
with a different structure (called a CW complex) in such a way that the
Betti numbers do not change. See [84, 113] for the basics of CW com-
plexes. The upshot of this theorem is that we may assume that given a
discrete Morse function on 𝐾, we have 𝑐𝑖 = 𝑚𝑖 for all 𝑖. Of course, this
is in general impossible if we restrict ourselves to simplicial complexes.
But because of the theorem, we may assume that 𝑐𝑖 = 𝑚𝑖 and that this
does not affect the Betti numbers.
Theorem 4.4 (Strong discrete Morse inequalities). Let 𝑓 ∶ 𝐾 → ℝ be a
discrete Morse function. For each 𝑝 = 0, 1, … , 𝑛, 𝑛 + 1, … , we have
𝑏𝑝 − 𝑏𝑝−1 + ⋯ + (−1)𝑝 𝑏0 ≤ 𝑚𝑝 − 𝑚𝑝−1 + ⋯ + (−1)𝑝 𝑚0 .
Proof. By Theorem 4.3, there is a CW complex 𝑋 with 𝑝-cells in bijective
correspondence to critical 𝑝-simplices of 𝑓. By the same theorem, we
have
𝑏𝑝 (𝐾) − 𝑏𝑝−1 (𝐾) + ⋯ + (−1)𝑝 𝑏0 (𝐾)
= 𝑏𝑝 (𝑋) − 𝑏𝑝−1 (𝑋) + ⋯ + (−1)𝑝 𝑏0 (𝑋)
≤ 𝑐𝑝 − 𝑐𝑝−1 + 𝑐𝑝−2 − ⋯ + (−1)𝑝 𝑐0
= 𝑚𝑝 − 𝑚𝑝−1 + 𝑚𝑝−2 − ⋯ + (−1)^𝑝 𝑚0 . □
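Checking the strong inequalities for a given pair of vectors is a small loop. The sketch below (mine) accepts the data of Example 4.8, where both the Betti numbers and the discrete Morse vector are (1, 3, 0), and rejects a vector that violates even the weak inequality 𝑏1 ≤ 𝑚1 .

```python
def strong_morse_ok(b, m):
    """Check b_p − b_{p−1} + ⋯ ≤ m_p − m_{p−1} + ⋯ for every p."""
    for p in range(len(b)):
        lhs = sum((-1) ** (p - i) * b[i] for i in range(p + 1))
        rhs = sum((-1) ** (p - i) * m[i] for i in range(p + 1))
        if lhs > rhs:
            return False
    return True

print(strong_morse_ok([1, 3, 0], [1, 3, 0]))  # True (a perfect function)
print(strong_morse_ok([1, 3, 0], [1, 2, 0]))  # False: m1 < b1 fails at p = 1
```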
Problem 4.5. Use the strong discrete Morse inequalities (Theorem 4.4)
to prove the weak discrete Morse inequalities (Theorem 4.1).
4.1.2. Perfect discrete Morse functions. Given Theorem 4.1, it is rea-
sonable to ask if and when equality is ever obtained. Let 𝐾 be an 𝑛-
dimensional simplicial complex. Recall that if 𝑓 ∶ 𝐾 → ℝ is a discrete
Morse function with 𝑚𝑖 critical simplices of dimension 𝑖, then the dis-
crete Morse vector of 𝑓 is defined by 𝑓 ⃗ = (𝑚0 , 𝑚1 , … , 𝑚𝑛 ).
Definition 4.6. A discrete Morse vector is called perfect if 𝑓 ⃗ =
(𝑏0 , 𝑏1 , … , 𝑏𝑛 ).
Problem 4.7. Let 𝑓 ∶ 𝐾 → ℝ be a discrete Morse function with 𝑓 ⃗ a
perfect discrete Morse vector. Show that
(i) 𝑓 ⃗ is unique;
(ii) 𝑓 ⃗ is optimal.
Example 4.8. Recall that in Example 2.88, we defined the discrete Morse
function 𝑔 ∶ 𝐾 → ℝ by
[Figure: the complex 𝐾 of Example 2.88 with the values of the discrete Morse function 𝑔 labeling its simplices.]
which had discrete Morse vector 𝑔⃗ = (1, 3, 0). If we compute the Betti
numbers of this complex using homology, we see that 𝑏0 = 1, 𝑏1 = 3,
and 𝑏2 = 0. Hence 𝑔⃗ is not only optimal but also unique by Problem 4.7.
Problem 4.9. Prove that there exists a perfect discrete Morse function
on Δ𝑛 and 𝑆 𝑛 for all 𝑛.
Does every simplicial complex admit a perfect discrete Morse func-
tion? The following result was shown by Ayala et al. [13].
Proposition 4.10. Let 𝐾 be a simplicial complex with 𝑏0 (𝐾) = 1 and
𝑏𝑖 (𝐾) = 0 for all 𝑖 > 0. If 𝐾 is not collapsible, then 𝐾 does not admit a
perfect discrete Morse function.
We postpone the proof of this result until Section 4.2.1. It will then
follow as an easy corollary.
Example 4.11. Let 𝐷 be the dunce cap. In Problem 3.24 you showed
that 𝑏0 (𝐷) = 1 and 𝑏𝑖 (𝐷) = 0 for all 𝑖 > 0. In addition, 𝐷 is clearly not
collapsible, as it has no free faces. By Proposition 4.10, 𝐷 does not admit
a perfect discrete Morse function.
In another paper [14], the same authors found examples of higher-
dimensional simplicial complexes which do not admit perfect discrete
Morse functions. Although Example 4.11 demonstrates the existence
of a simplicial complex which does not admit a perfect discrete Morse
function, many classes of simplicial complexes do admit perfect discrete
Morse functions. The following problems ask you to show this in some
specific cases.
Problem 4.12. Let 𝐾 be the simplicial complex consisting of 𝑛 isolated
points, i.e., 𝐾 ∶= {𝑣0 , 𝑣1 , … , 𝑣𝑛−1 }.
(i) Compute the Betti numbers of 𝐾.
(ii) Prove that every discrete Morse function on 𝐾 is perfect.
Problem 4.13. Let 𝐺 be a 1-dimensional simplicial complex with 𝑏0 (𝐺)
= 1.
(i) Prove that there are 𝑏1 edges that may be removed from 𝐺 so
that the resulting graph is a tree.
(ii) Prove that there exists a perfect discrete Morse function on the
resulting tree.
(iii) Now prove that there exists a perfect discrete Morse function
on 𝐺.
In addition to these examples, Adiprasito and Benedetti [3] have
shown that a certain class of 3-dimensional simplicial complexes admits
perfect discrete Morse functions.
4.1.3. Towards optimal perfection. This section has raised the ques-
tion of how we can improve a given discrete Morse function to obtain a
better one, that is, one with fewer critical simplices and hence one step
closer to optimality or perfection (if the latter exists). One method is that
of canceling critical simplices. This method allows us to “extend” a
given gradient vector field to a larger one. In order to do this, we will
show how to “deform” one discrete Morse function into another. The
ideas in this section are attributable to Forman [65] and presented in
the fashion of [150, III.4]. We will use the method of canceling critical
simplices in an algorithm in Section 9.1.
Definition 4.14. A discrete Morse function 𝑓 is called flat if whenever
(𝜎, 𝜏) is a regular pair of 𝑓, we have 𝑓(𝜎) = 𝑓(𝜏).
Exercise 4.15. Show that every basic discrete Morse function is a flat
discrete Morse function. Give an example of a flat discrete Morse func-
tion which is not basic.
The following proposition tells us that we may transform any dis-
crete Morse function into a flat one, and the result will be Forman equiv-
alent to the original.
Proposition 4.16. Let 𝑓 ∶ 𝐾 → ℝ be a discrete Morse function and 𝑉𝑓
the induced gradient vector field. Then there exists a flat discrete Morse
function 𝑔 ∶ 𝐾 → ℝ such that 𝑓 and 𝑔 are Forman equivalent.
Proof. Let 𝑓 ∶ 𝐾 → ℝ be a discrete Morse function, and define a flat-
tening of 𝑓 by
𝑔(𝜎) ∶= 𝑓(𝜏) if (𝜎, 𝜏) ∈ 𝑉𝑓 for some 𝜏,
𝑔(𝜎) ∶= 𝑓(𝜎) otherwise.
Observe that for any 𝜎 ∈ 𝐾, 𝑔(𝜎) ≤ 𝑓(𝜎). Let 𝑐 be a critical value
of 𝑓. Then there is a 𝑝-simplex 𝜏 ∈ 𝐾 with 𝑓(𝜏) = 𝑐 such that for all
𝜎(𝑝−1) < 𝜏 < 𝜂(𝑝+1) we have that 𝑓(𝜎) < 𝑓(𝜏) < 𝑓(𝜂). Since 𝜏 is critical,
𝑔(𝜏) = 𝑓(𝜏) so that 𝑔(𝜎) ≤ 𝑓(𝜎) < 𝑓(𝜏) = 𝑔(𝜏). It remains to show
that 𝑔(𝜏) < 𝑔(𝜂). If 𝜂 is not the tail of a vector in 𝑉𝑓 , then 𝑔(𝜂) = 𝑓(𝜂),
hence the result. Thus, suppose that there is 𝛾(𝑝+2) > 𝜂 such that 𝑓(𝜂) ≥
𝑓(𝛾), and, by contradiction, suppose that 𝑓(𝜏) ≥ 𝑓(𝛾). But Problem 4.17
implies that 𝜏 is not critical, which is a contradiction. Clearly 𝑔 does
not have any critical simplices that 𝑓 does not, since the regular pairs of
𝑓 remain regular pairs of 𝑔. Because furthermore no new regular pairs
are introduced, 𝑉𝑓 = 𝑉𝑔 and so, by Theorem 2.53, 𝑓 and 𝑔 are Forman
equivalent. □
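The flattening used in the proof is easy to sketch in code (my own illustration, with 𝑉𝑓 given as a set of regular pairs and simplices as tuples):

```python
def flatten(f, V):
    """Flattening of a discrete Morse function f: each regular pair
    (sigma, tau) in V gets the common value f(tau); all other simplices
    keep their f-value. f is a dict from simplex to value."""
    g = dict(f)
    for sigma, tau in V:
        g[sigma] = f[tau]
    return g

# Toy example: an edge ab with the regular pair (b, ab).
f = {('a',): 0, ('b',): 2, ('a', 'b'): 1}
V = {(('b',), ('a', 'b'))}
g = flatten(f, V)
print(g[('b',)] == g[('a', 'b')] == 1)  # True: the pair is now flat
```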
Problem 4.17. Let 𝐾 be a simplicial complex and suppose that there
exist simplices 𝜎(𝑝) < 𝜏(𝑝+2) such that 𝑓(𝜎) ≥ 𝑓(𝜏). Prove that both 𝜎
and 𝜏 are not critical.
We can now define a family of discrete Morse functions on 𝐾 that
smoothly deforms a given flat discrete Morse function into another one.
Generally, a deformation of one object into another is called a homo-
topy. We saw one version of “deformation” in Section 1.2, where we
deformed one simplicial complex into another through a series of col-
lapses and expansions. We started with one simplicial complex 𝐾 and,
through a series of intermediary simplicial complexes, obtained the sim-
plicial complex 𝐿. Now we want to take this idea and extend it to flat
discrete Morse functions; that is, given flat discrete Morse functions 𝑓
and 𝑔, we want to start with 𝑓 and perform a series of “deformations”
to obtain intermediary flat discrete Morse functions that ultimately end
with 𝑔.
Lemma 4.18. Let 𝑓, 𝑔 ∶ 𝐾 → ℝ be flat discrete Morse functions, and
define ℎ𝑡 (𝜎) ∶= (1 − 𝑡)𝑓(𝜎) + 𝑡𝑔(𝜎) for all 𝜎 ∈ 𝐾 and 𝑡 ∈ [0, 1]. Then
ℎ𝑡 is a discrete Morse function on 𝐾 for all 𝑡 ∈ [0, 1]. Furthermore, for
every 𝑡 ∈ (0, 1) we have that 𝑉ℎ𝑡 = 𝑉𝑓 ∩ 𝑉𝑔 . In particular, all the ℎ𝑡 ,
𝑡 ∈ (0, 1), are Forman equivalent.
The function ℎ𝑡 is a standard construction in homotopy theory
known as the straight-line homotopy.
Proof. Assume without loss of generality that 𝑓, 𝑔 ∶ 𝐾 → ℝ>0 . That ℎ𝑡
is a discrete Morse function is Problem 4.19. Let 𝜎 < 𝜏. We use subset in-
clusion to show that 𝑉ℎ𝑡 = 𝑉𝑓 ∩ 𝑉𝑔 for all 𝑡 ∈ (0, 1). Let (𝜎, 𝜏) ∈ 𝑉ℎ𝑡 . Then
𝜎(𝑝) < 𝜏(𝑝+1) and ℎ𝑡 (𝜎) = ℎ𝑡 (𝜏) using the fact that ℎ𝑡 is flat. Since it is al-
ways the case that 𝜎(𝑝) < 𝜏(𝑝+1) , we only need to show that 𝑓(𝜎) = 𝑓(𝜏)
and 𝑔(𝜎) = 𝑔(𝜏). Now since 𝜎(𝑝) < 𝜏(𝑝+1) , 𝑓(𝜎) ≯ 𝑓(𝜏) and 𝑔(𝜎) ≯ 𝑔(𝜏)
by definition of 𝑓 and 𝑔 being flat. Hence 𝑓(𝜎) ≤ 𝑓(𝜏) and 𝑔(𝜎) ≤ 𝑔(𝜏).
Suppose by contradiction that at least one of 𝑓(𝜎) < 𝑓(𝜏) and 𝑔(𝜎) < 𝑔(𝜏)
holds. Then ℎ𝑡 (𝜎) = (1 − 𝑡)𝑓(𝜎) + 𝑡𝑔(𝜎) < (1 − 𝑡)𝑓(𝜏) + 𝑡𝑔(𝜏) = ℎ𝑡 (𝜏), a
contradiction. Thus 𝑓(𝜎) = 𝑓(𝜏) and 𝑔(𝜎) = 𝑔(𝜏), so that (𝜎, 𝜏) ∈ 𝑉𝑓 ∩ 𝑉𝑔 .
Now we show that if (𝜎, 𝜏) ∈ 𝑉𝑓 ∩ 𝑉𝑔 , then (𝜎, 𝜏) ∈ 𝑉ℎ𝑡 . Since (𝜎, 𝜏) ∈
𝑉𝑓 ∩ 𝑉𝑔 and 𝑓 and 𝑔 are flat, 𝑓(𝜎) = 𝑓(𝜏) = 𝑔(𝜎) = 𝑔(𝜏); hence it easily
follows that ℎ𝑡 (𝜎) = (1 − 𝑡)𝑓(𝜎) + 𝑡𝑔(𝜎) = (1 − 𝑡)𝑓(𝜏) + 𝑡𝑔(𝜏) = ℎ𝑡 (𝜏).
We conclude that 𝑉ℎ𝑡 = 𝑉𝑓 ∩ 𝑉𝑔 and all the ℎ𝑡 are Forman equivalent. □
Problem 4.19. Prove that the function ℎ𝑡 defined in Lemma 4.18 is a
discrete Morse function.
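A toy illustration of Lemma 4.18 (my own, not from the book): on the complex {𝑎, 𝑏, 𝑎𝑏}, let the flat function 𝑓 pair 𝑎 with 𝑎𝑏 and let 𝑔 pair 𝑏 with 𝑎𝑏. At 𝑡 = 1/2 the straight-line homotopy assigns neither face the same value as the edge, so 𝑉ℎ𝑡 is empty, which is exactly 𝑉𝑓 ∩ 𝑉𝑔 .

```python
def h(t, f, g):
    """Straight-line homotopy between two discrete Morse functions."""
    return {s: (1 - t) * f[s] + t * g[s] for s in f}

# Flat f pairs (a, ab); flat g pairs (b, ab).
f = {('a',): 0, ('b',): 1, ('a', 'b'): 0}
g = {('a',): 1, ('b',): 0, ('a', 'b'): 0}
ht = h(0.5, f, g)
print(ht)  # {('a',): 0.5, ('b',): 0.5, ('a', 'b'): 0.0}
# Neither vertex value equals the edge's value, so no regular pair
# survives: V_h is empty, matching V_f ∩ V_g.
```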
Example 4.20. To illustrate Lemma 4.18, we will find a homotopy be-
tween two very different discrete Morse functions on the Möbius band
𝑀. The discrete Morse function 𝑓 along with its induced gradient vector
field is shown below.
[Figure: a triangulation of the Möbius band 𝑀 with the values of 𝑓 and the arrows of its induced gradient vector field.]
Let 𝑔 along with its gradient vector field be as follows:
[Figure: the same triangulation of 𝑀 with the values of 𝑔 and its induced gradient vector field.]
As in Lemma 4.18, the value of a simplex 𝜎 for any 𝑡 ∈ (0, 1) is given
by (1 − 𝑡)𝑓(𝜎) + 𝑡𝑔(𝜎). It can then be checked that when 𝑀 is given these
labels, the resulting gradient vector field is
[Figure: the resulting gradient vector field on 𝑀,]
which is precisely 𝑉𝑓 ∩ 𝑉𝑔 .
Recall that given a gradient vector field 𝑉 on 𝐾, a 𝑉-path 𝛾 is a
sequence of simplices 𝛼0(𝑝) , 𝛽0(𝑝+1) , 𝛼1(𝑝) , 𝛽1(𝑝+1) , 𝛼2(𝑝) , … , 𝛽𝑘(𝑝+1) , 𝛼𝑘+1(𝑝) in 𝑉
such that (𝛼𝑖(𝑝) , 𝛽𝑖(𝑝+1) ) ∈ 𝑉 for 0 ≤ 𝑖 ≤ 𝑘 and 𝛽𝑖(𝑝+1) > 𝛼𝑖+1(𝑝) ≠ 𝛼𝑖(𝑝) .
Furthermore, we may view a gradient vector field 𝑉 as being induced by
a discrete Morse function 𝑓 or abstractly as a discrete vector field with
no closed 𝑉-paths (Theorem 2.51).
Example 4.21. Let us again consider the gradient vector field 𝑉𝑓 ∩ 𝑉𝑔
found in Example 4.20, with 𝜎 and 𝜏 labeled as follows:
[Figure: the gradient vector field 𝑉𝑓 ∩ 𝑉𝑔 on 𝑀 with the critical simplices 𝜎 and 𝜏 marked.]
Now both 𝜎 and 𝜏 are critical and there is furthermore a path
𝜏, 𝑣0 𝑣1 , 𝑣0 𝑣1 𝑣3 , 𝜎
between them. We can extend this path and simultaneously eliminate
these critical simplices by reversing the arrows of the path as follows:
[Figure: the modified gradient vector field on 𝑀 after reversing the arrows along the path.]
In general, given the right set-up, it is always possible to reverse ar-
rows in a path, thereby turning two critical simplices into non-critical
simplices. The precise meaning of this statement is given in Proposition
4.22 below.
Proposition 4.22 (Canceling critical simplices). Let 𝑉 be a gradient vec-
tor field on 𝐾, and suppose that there are two critical simplices 𝜎(𝑝) = 𝜎
and 𝜏(𝑝+1) = 𝜏 with the property that there exists a unique 𝑉-path
𝛾 ∶= [𝛾0(𝑝) , 𝜏0(𝑝+1) , 𝛾1(𝑝) , … , 𝛾𝑛−1(𝑝) , 𝜏𝑛−1(𝑝+1) , 𝛾𝑛(𝑝) = 𝜎], where 𝛾0 < 𝜏. Define 𝑉̄ to
satisfy the following three properties:
(a) 𝑉̄ − 𝛾 = 𝑉 − 𝛾;
(b) (𝛾0 , 𝜏) ∈ 𝑉̄ ;
(c) (𝛾𝑖+1 , 𝜏𝑖 ) ∈ 𝑉̄ for 𝑖 = 0, … , 𝑛 − 1.
Then 𝑉̄ is a gradient vector field. Moreover, there exists a unique
𝑉̄ -path from 𝜎 to 𝛾0 .
Proof. First, observe that by construction, the critical simplices of 𝑉 are
exactly the critical simplices of 𝑉̄ other than 𝜏 and 𝜎. It is clear that 𝑉̄
is a discrete vector field. By Theorem 2.51, it remains to show that 𝑉̄
does not contain any closed 𝑉̄ -paths. By (a), 𝑉̄ and 𝑉 differ only on 𝛾, so
𝑉̄ cannot contain a closed path lying in 𝑉̄ − 𝛾 (otherwise it would also
be a closed 𝑉-path in 𝑉). Hence if 𝑉̄ does have a closed path, it must
contain a segment 𝛾𝑖 , 𝛿0 , … , 𝛿𝑟 , 𝛾𝑗 where 𝛿𝑘 ∉ 𝛾. Since (𝛾𝑖−1 , 𝜏𝑖−1 ) ∈ 𝑉
and (𝛾𝑖 , 𝜏𝑖−1 ) ∈ 𝑉̄ , we have that 𝛾0 , … , 𝛾𝑖−1 , 𝛿0 , … , 𝛿𝑟 , 𝛾𝑗 , … , 𝛾𝑛 is a 𝑉-path
from 𝜏 to 𝜎, contradicting the fact that 𝛾 is unique.
To see that the path 𝜎 = 𝛾𝑛 , 𝜏𝑛−1 , … , 𝛾0 is the unique 𝑉̄ -path between
𝜎 and 𝛾0 , suppose there is another such 𝑉̄ -path. Then, as above, it must
contain a segment of the form 𝛾𝑖 , 𝜖0 , … , 𝜖ℓ , 𝛾𝑗 with 𝜖𝑘 ∉ 𝛾 and 𝑖 < 𝑗
(otherwise 𝛾𝑗 , … , 𝜖0 , … , 𝜖ℓ , 𝛾𝑖 , … , 𝛾𝑗 would be a closed 𝑉̄ -path). But now
𝜖0 , … , 𝜖ℓ , 𝛾𝑗 , 𝛾𝑗+1 , … , 𝛾𝑖−1 , 𝜖0 is a closed 𝑉̄ -path, a contradiction. Hence the
𝑉̄ -path between 𝜎 and 𝛾0 is unique. □
The method of Proposition 4.22 is known as canceling critical sim-
plices, and it is much easier to understand than the proposition lets on.
The criterion for detecting when canceling is possible is the existence of
a unique 𝑉-path between critical simplices. Then you simply reverse the
directions of the arrows in the 𝑉-path, add one extra arrow, and voilà—
one fewer critical simplex! Remember that it is necessary that the 𝑉-path
be unique, as Problem 4.23 illustrates.
Problem 4.23. Why can we not cancel two critical simplices if there is
more than one 𝑉-path between them? Give an example.
4.2. The collapse theorem
Another foundational result with many applications is the collapse theo-
rem. At the beginning of Chapter 2, we saw how a collection of arrows on
a simplicial complex could be thought of as encoding a sequence of col-
lapses. Sometimes, after a sequence of collapses, we would get stuck and
have to “rip out” a simplex before we could continue collapsing. Given
a discrete Morse function, the collapse theorem tells us when we can
perform these collapses and when we will “get stuck.” Before we state
and prove this theorem, we introduce level subcomplexes. In addition to
giving us a language to make the collapse theorem precise, level subcom-
plexes allow us to define a new notion of equivalence of discrete Morse
functions in Section 5.1.1.
4.2.1. Level subcomplexes. Let 𝑓 ∶ 𝐾 → ℝ be a discrete Morse func-
tion. For any 𝑐 ∈ ℝ, the level subcomplex 𝐾(𝑐) is the subcomplex of 𝐾
consisting of all simplices 𝜏 with 𝑓(𝜏) ≤ 𝑐, as well as their faces; i.e.,
𝐾(𝑐) = ⋃_{𝑓(𝜏)≤𝑐} ⋃_{𝜎≤𝜏} 𝜎.
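The defining formula translates directly into code. A sketch (mine, not from the book), with simplices as sorted tuples and 𝑓 as a dictionary:

```python
from itertools import chain, combinations

def level_subcomplex(K, f, c):
    """K(c): all simplices with f-value ≤ c, together with their faces."""
    low = [s for s in K if f[s] <= c]
    return set(chain.from_iterable(
        combinations(s, k) for s in low for k in range(1, len(s) + 1)))

# Toy example: a single triangle with its edges and vertices.
K = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
f = {(0,): 0, (1,): 1, (2,): 2, (0, 1): 1, (0, 2): 3,
     (1, 2): 3, (0, 1, 2): 4}
print(sorted(level_subcomplex(K, f, 1)))  # [(0,), (0, 1), (1,)]
```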
Usually we are interested in studying the level subcomplexes in-
duced by the critical values of a discrete Morse function, as the following
example illustrates.
Example 4.24. Let 𝐾 be the simplicial complex with discrete Morse
function given below:
[Figure: a simplicial complex 𝐾 with the values of a discrete Morse function labeling its simplices.]
We are interested in the level subcomplexes induced by critical
values. The critical values in increasing order are easily seen to be
0, 2, 3, 5, 6, 7, 10, and 11. Each of these critical values induces a level sub-
complex. Think of them as building 𝐾 in stages. The level subcomplex
𝐾(0) is everything labeled 0 or less:
i.e., a single vertex. The level subcomplex 𝐾(2) is almost as uninterest-
ing, as it consists of only two isolated vertices:
Next is level subcomplex 𝐾(3):
Now for 𝐾(5):
On to 𝐾(6):
The level subcomplex 𝐾(7) then completes a cycle:
We see a 2-simplex come in at 𝐾(10):
We finish with 𝐾(11):
The following lemma tells us that given a discrete Morse function
𝑓, we may perturb 𝑓 slightly to make it 1–1 without changing a couple
of specified level subcomplexes.
Lemma 4.25. Let 𝑓 ∶ 𝐾 → ℝ be a discrete Morse function and [𝑎, 𝑏] ⊆
ℝ an interval that contains no critical values. Then there is a discrete
Morse function 𝑓′ ∶ 𝐾 → ℝ satisfying the following properties:
(i) 𝑓′ is 1–1 on [𝑎, 𝑏];
(ii) 𝑓′ has no critical values in [𝑎, 𝑏];
(iii) 𝐾𝑓 (𝑏) = 𝐾𝑓′ (𝑏) and 𝐾𝑓 (𝑎) = 𝐾𝑓′ (𝑎);
(iv) 𝑓 = 𝑓′ outside of [𝑎, 𝑏].
Problem 4.26. Prove Lemma 4.25.
The next theorem tells us that there is “nothing interesting” (topo-
logically speaking) happening in between level subcomplexes. In other
words, we are justified in only considering the level subcomplexes in-
duced by the critical values (as opposed to regular values or values not
in the range of the discrete Morse function).
Theorem 4.27 (Collapse theorem). Let 𝑓 ∶ 𝐾 → ℝ be a discrete Morse
function and [𝑎, 𝑏] ⊆ ℝ an interval that contains no critical values. Then
𝐾(𝑏) ↘ 𝐾(𝑎).
Proof. Applying Lemma 4.25 and with an abuse of notation, we may
assume that 𝑓 is 1–1. If 𝑓(𝜎) ∉ [𝑎, 𝑏] for all 𝜎 ∈ 𝐾, then 𝐾(𝑎) = 𝐾(𝑏) and
we are done. Otherwise, since 𝑓 has discrete image and was assumed to
be 1–1, we may break [𝑎, 𝑏] up into subintervals such that each interval
contains exactly one regular value. Again, with an abuse of notation, we
will assume 𝜎 is the simplex such that 𝑓(𝜎) is the unique regular value
of 𝑓 in [𝑎, 𝑏]. By Lemma 2.24, exactly one of the following holds:
• there exists 𝜏(𝑝+1) > 𝜎 such that 𝑓(𝜏) ≤ 𝑓(𝜎);
• there exists 𝜈(𝑝−1) < 𝜎 such that 𝑓(𝜈) ≥ 𝑓(𝜎).
For the second case, suppose that there exists 𝜈(𝑝−1) < 𝜎 such that
𝑓(𝜈) ≥ 𝑓(𝜎). We claim that {𝜎, 𝜈} is a free pair in 𝐾(𝑏). Suppose to the
contrary that there exists a second coface 𝜎̃(𝑝) > 𝜈 with 𝜎̃ ∈ 𝐾(𝑏). Be-
cause 𝑓(𝜈) ≥ 𝑓(𝜎) and 𝑓 is a discrete Morse function, 𝑓(𝜈) < 𝑓(𝜎̃). By
definition of 𝜎̃ ∈ 𝐾(𝑏), we have that either 𝑓(𝜎̃) ≤ 𝑏 or there exists 𝛼 > 𝜎̃
such that 𝑓(𝛼) ≤ 𝑏. If 𝑓(𝜎̃) ≤ 𝑏, then 𝑎 ≤ 𝑓(𝜎) ≤ 𝑓(𝜈) < 𝑓(𝜎̃) ≤ 𝑏.
Since 𝑓(𝜎̃) cannot be a critical value by hypothesis, 𝑓(𝜎̃) must be a reg-
ular value, contradicting the supposition that 𝑓(𝜎) is the only regular
value in [𝑎, 𝑏]. The same argument shows that such an 𝛼 would also
yield an additional regular value in [𝑎, 𝑏]. Thus {𝜎, 𝜈} is a free pair in
𝐾(𝑏), so 𝐾(𝑏) ↘ 𝐾(𝑏) − {𝜎, 𝜈} is an elementary collapse. Doing this over
the subintervals, we see that 𝐾(𝑏) ↘ 𝐾(𝑎). The first case is identical. □
Problem 4.28. Let 𝑓 ∶ 𝐾 → ℝ be a discrete Morse function with exactly
one critical simplex. Prove that 𝐾 is collapsible. (Note that this is the
converse of Problem 2.34.)
We may now prove Proposition 4.10, which says that there exist sim-
plicial complexes that do not admit perfect discrete Morse functions.
Proof of Proposition 4.10. If 𝐾 admits a perfect discrete Morse vector
𝑓⃗, then by definition 𝑓⃗ = (1, 0, 0, … , 0). By Problem 4.28, 𝐾 being not
collapsible implies that any discrete Morse function on 𝐾 has at least two
critical simplices. Thus 𝐾 cannot admit a perfect discrete Morse vector. □
We may also generalize Theorem 4.27 to the case where 𝑓 is a gen-
eralized discrete Morse function as in Section 2.2.3. We first state and
prove an easy lemma.
Lemma 4.29. For every generalized discrete vector field, there is a (stan-
dard) discrete vector field that refines every non-singular, non-empty in-
terval into pairs.
Proof. Let [𝛼, 𝛽] be a non-singular and non-empty interval. Then 𝛼 < 𝛽.
Choose a vertex 𝑣 ∈ 𝛽 − 𝛼, and partition [𝛼, 𝛽] into the pairs
{𝛾 − {𝑣}, 𝛾 ∪ {𝑣}} for 𝛾 ∈ [𝛼, 𝛽]. □
The technique employed in the proof of Lemma 4.29 is known as a
vertex refinement of the partition. This is simply a way to break each
interval of a generalized discrete vector field into elementary collapses.
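The vertex refinement in the proof of Lemma 4.29 is concrete enough to compute. Here is a minimal sketch, with our own names (`interval`, `vertex_refinement`) and simplices again represented as frozensets of vertices; any vertex of 𝛽 − 𝛼 may be chosen, and we simply take the smallest.

```python
from itertools import chain, combinations

def interval(alpha, beta):
    """All simplices gamma with alpha <= gamma <= beta, i.e. alpha
    together with any subset of the extra vertices of beta."""
    extra = sorted(beta - alpha)
    subsets = chain.from_iterable(
        combinations(extra, k) for k in range(len(extra) + 1))
    return [alpha | frozenset(s) for s in subsets]

def vertex_refinement(alpha, beta):
    """Split the non-singular interval [alpha, beta] into the pairs
    {gamma - {v}, gamma + {v}} for one chosen vertex v of beta - alpha."""
    v = min(beta - alpha)          # any vertex of beta - alpha works
    return {(gamma - {v}, gamma | {v}) for gamma in interval(alpha, beta)}

alpha, beta = frozenset({0}), frozenset({0, 1, 2})
pairs = vertex_refinement(alpha, beta)
# The four simplices of [alpha, beta] are refined into two pairs.
```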
The following corollary is immediate from Lemma 4.29 and Theo-
rem 4.27.
Corollary 4.30 (Generalized collapse theorem). Let 𝐾 be a simplicial
complex with generalized discrete vector field 𝑉, and let 𝐾 ′ ⊆ 𝐾 be a
subcomplex. If 𝐾 − 𝐾 ′ is a union of non-singular intervals in 𝑉, then
𝐾 ↘ 𝐾 ′.
Exercise 4.31. Give an example to show that in general the vertex re-
finement of Lemma 4.29 need not be unique.
Chapter 5
Discrete Morse theory
and persistent homology
This chapter introduces persistent homology, a powerful computational
tool with a wealth of applications [40, 44, 141, 149], including many in
data analysis [133]. Persistent homology was originally introduced by
Edelsbrunner, Letscher, and Zomorodian [56] in 2002, although certain
ideas in persistent homology can be found earlier [73], even in the work
of Morse himself [125].
In Section 5.1, we compute persistent homology without worrying
about its theoretical foundations. For those who would like to delve
more deeply into the theoretical relationship between discrete Morse
theory and persistent homology, we offer Section 5.2.4, where we follow
U. Bauer’s doctoral thesis [24] by using discrete Morse theory to develop
a theoretical framework for persistent homology.
5.1. Persistence with discrete Morse functions
Before we can begin performing persistent homology computations, we
introduce a new notion of equivalence of discrete Morse functions. We
saw in Section 2.1 one such notion of equivalence of discrete Morse func-
tions, defined in terms of their induced gradient vector field. Another
notion of equivalence is based on the homology of the induced level sub-
complexes.
5.1.1. Homological equivalence.
Example 5.1. Consider the discrete Morse function 𝑓 in Example 4.24.
This is an excellent discrete Morse function with critical values 0, 2, 3, 5,
6, 7, 10, and 11. For each of the level subcomplexes 𝐾(0), 𝐾(2), 𝐾(3), 𝐾(5),
𝐾(6), 𝐾(7), 𝐾(10), and 𝐾(11), recording the corresponding Betti numbers
yields the following homological sequence:
𝐵0 ∶ 1 2 3 2 1 1 1 1
𝐵1 ∶ 0 0 0 0 0 1 2 1
𝐵2 ∶ 0 0 0 0 0 0 0 0
Notice that only one value changes when moving from column to
column and that the last column is the homology of the original simplicial
complex 𝐾 even though 𝐾 ≠ 𝐾(11). These observations and others are true of
the homological sequence of any excellent discrete Morse function. We
prove this in Theorem 5.9.
Definition 5.2. Let 𝑓 be a discrete Morse function with 𝑚 critical val-
ues on an 𝑛-dimensional simplicial complex 𝐾. The homological se-
quence of 𝑓 is given by the 𝑛 + 1 functions
𝐵0^𝑓 , 𝐵1^𝑓 , … , 𝐵𝑛^𝑓 ∶ {0, 1, … , 𝑚 − 1} → ℕ ∪ {0}
defined by 𝐵𝑘^𝑓(𝑖) ∶= 𝑏𝑘 (𝐾(𝑐𝑖 )) for all 0 ≤ 𝑘 ≤ 𝑛 and 0 ≤ 𝑖 ≤ 𝑚 − 1. We
usually write 𝐵𝑘 (𝑖) for 𝐵𝑘^𝑓(𝑖) when the discrete Morse function 𝑓 is clear
from the context.
Problem 5.3. Let 𝑓 ∶ 𝐾 → ℝ be the discrete Morse function given by
[Figure: a simplicial complex whose simplices carry the labels 0, 1, 2, … , 14.]
Find the homological sequence of 𝑓.
Exercise 5.4. Suppose 𝑓 ∶ 𝐾 → ℝ is a perfect discrete Morse function.
Prove that for any 𝑘, 𝐵𝑘 (𝑖) ≤ 𝐵𝑘 (𝑖 + 1) for all 0 ≤ 𝑖 ≤ 𝑚 − 2.
Definition 5.5. Two discrete Morse functions 𝑓, 𝑔 ∶ 𝐾 → ℝ with 𝑚
critical values are homologically equivalent if 𝐵𝑘^𝑓(𝑖) = 𝐵𝑘^𝑔(𝑖) for all
𝑘 ≥ 0 and 0 ≤ 𝑖 ≤ 𝑚 − 1.
Example 5.6. Consider the two discrete Morse functions 𝑓 and 𝑔 on the
complex 𝐾 shown respectively on the left and right below.
[Figure: two labeled copies of 𝐾, with the values of 𝑓 on the left and the values of 𝑔 on the right, each using the labels 0 through 13.]
It is an easy exercise to write down the level subcomplexes and com-
pute that both discrete Morse functions have the homological sequence
𝐵0 ∶ 1 2 1 1 1 2 1 1
𝐵1 ∶ 0 0 0 1 2 2 2 3
Hence 𝑓 and 𝑔 are homologically equivalent. However, if we pass to their
gradient vector fields
we see that 𝑉𝑓 ≠ 𝑉𝑔 and hence, by Proposition 2.53, 𝑓 and 𝑔 are not
Forman equivalent.
Exercise 5.7. Prove that if 𝑓 is a discrete Morse function, the flattening
𝑔 of 𝑓 is homologically equivalent to 𝑓. See the proof of Proposition 4.16
for the definition of the flattening.
Problem 5.8. Let 𝐾 be a simplicial complex with excellent discrete
Morse function 𝑓 and suppose that 𝑎 is the minimum value of 𝑓. Prove
that there exists a unique critical 0-simplex 𝜎 such that 𝑓(𝜎) = 𝑎.
Homologically equivalent discrete Morse functions were first intro-
duced and studied by Ayala et al. [6, 12] in the context of graphs with
infinite rays. They were further studied for orientable surfaces [11] and
for 2-dimensional collapsible complexes [5]. There is a version for per-
sistent homology [112] and for graph isomorphisms [1].
Excellent discrete Morse functions are particularly well behaved.
Theorem 5.9. Let 𝑓 be an excellent discrete Morse function on a con-
nected 𝑛-dimensional simplicial complex 𝐾 with 𝑚 critical values
𝑐0 , 𝑐1 , … , 𝑐𝑚−1 . Then each of the following holds:
(i) 𝐵0 (0) = 𝐵0 (𝑚 − 1) = 1 and 𝐵𝑑 (0) = 0 for all 𝑑 ∈ ℤ≥1 .
(ii) For all 0 ≤ 𝑖 < 𝑚 − 1, |𝐵𝑑 (𝑖 + 1) − 𝐵𝑑 (𝑖)| = 0 or 1 whenever
0 ≤ 𝑑 ≤ 𝑛 and 𝐵𝑑 (𝑖) = 0 whenever 𝑑 ≥ 𝑛 + 1.
(iii) 𝐵𝑑 (𝑚 − 1) = 𝑏𝑑 (𝐾) for all 𝑑 ∈ ℤ≥0 .
(iv) For each 𝑖 = 0, 1, … , 𝑚 − 2 and all 𝑝 ≥ 1, either
𝐵𝑝−1 (𝑖) = 𝐵𝑝−1 (𝑖 + 1) or
𝐵𝑝 (𝑖) = 𝐵𝑝 (𝑖 + 1).
In either case, 𝐵𝑑 (𝑖) = 𝐵𝑑 (𝑖 + 1) for any 𝑑 ≠ 𝑝, 𝑝 − 1 and 1 ≤
𝑑 ≤ 𝑛.
Proof. We proceed in order. For (i), choose 𝑦 ∈ ℕ such that 𝐾(𝑐𝑚−1 +
𝑦) = 𝐾. By Theorem 4.27, 𝑏0 (𝐾(𝑐𝑚−1 )) = 𝑏0 (𝐾(𝑐𝑚−1 + 𝑦)) = 𝑏0 (𝐾). Since
𝐾 is connected, 𝑏0 (𝐾(𝑐𝑚−1 )) = 𝐵0 (𝑚 − 1) = 1. By Problem 5.8, 𝐾(𝑐0 )
consists of a single 0-simplex. Thus 𝐵𝑑 (0) = 0 for all 𝑑 ∈ ℤ≥1 . This
proves the first assertion.
For (ii), we note that by Theorem 4.27, 𝑏𝑑 (𝐾(𝑐𝑖 )) = 𝑏𝑑 (𝐾(𝑥)) for any
𝑥 ∈ [𝑐𝑖 , 𝑐𝑖+1 ). Since 𝑓 is excellent, there exists 𝜖 > 0 such that 𝐾(𝑐𝑖+1 ) =
𝐾(𝑐𝑖+1 − 𝜖) ∪ 𝜎(𝑝) where 𝜎(𝑝) is a critical 𝑝-simplex such that 𝑓(𝜎(𝑝) ) =
𝑐𝑖+1 . We now apply Lemma 3.36 to each of the following cases: If 𝑝 = 𝑑,
then 𝐵𝑑 (𝑖 + 1) − 𝐵𝑑 (𝑖) = 0 or 1. If 𝑝 = 𝑑 + 1, then 𝐵𝑑 (𝑖 + 1) − 𝐵𝑑 (𝑖) = −1
or 0. Otherwise, 𝐵𝑑 (𝑖 + 1) − 𝐵𝑑 (𝑖) = 0, which proves (ii).
For (iii), observe that 𝑚 − 1 is the maximum critical value. By The-
orem 4.27, 𝐵𝑑 is constant for all values 𝑥 > 𝑐𝑚−1 . Since there is a 𝑦 ∈ ℕ
such that 𝐾(𝑐𝑚−1 + 𝑦) = 𝐾, we see that 𝐵𝑑 (𝑚 − 1) = 𝑏𝑑 (𝐾).
Finally, we apply Theorem 4.27 to see that 𝑏𝑑 (𝐾(𝑐𝑖 )) = 𝑏𝑑 (𝐾(𝑥))
for all 𝑥 ∈ [𝑐𝑖 , 𝑐𝑖+1 ). Since 𝑓 is excellent, there exists 𝜖 > 0 such that
𝐾(𝑐𝑖+1 ) = 𝐾(𝑐𝑖+1 − 𝜖) ∪ 𝜎(𝑝) as in the proof of (ii). Observe that, by Lemma
3.36, the addition of a 𝑝-dimensional simplex will change either 𝐵𝑝 or
𝐵𝑝−1 , leaving all other values fixed. □
Exercise 5.10. Find a discrete Morse function on the simplicial complex
that induces the homological sequence
𝐵0 ∶ 1 2 3 3 2 1 1
𝐵1 ∶ 0 0 0 1 1 1 0
Exercise 5.11. Give an example of a simplicial complex 𝐾 and a discrete
Morse function on 𝐾 that induces the following homological sequence:
𝐵0 ∶ 5 10 20 30 36 43
Why does this example not contradict Theorem 5.9?
5.1.2. Classical persistence. While the homological sequence does
give us important information about how the topology of the simpli-
cial complex changes with respect to a fixed discrete Morse function, we
may want more information. The following example shows what kind
of other information we may want.
Example 5.12. Suppose you are told that a discrete Morse function 𝑓
on some simplicial complex yields the following homological sequence:
𝑐0 𝑐1 𝑐2 𝑐3 𝑐4 𝑐5 𝑐6 𝑐7 𝑐8 𝑐9 𝑐10 𝑐11 𝑐12 𝑐13 𝑐14 𝑐15 𝑐16
𝐵0 ∶ 1 2 3 2 3 2 3 4 3 2 3 2 2 2 1 1 1
𝐵1 ∶ 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 3 2
𝐵2 ∶ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
The homological sequence is meant to give us summary informa-
tion about 𝑓. But we might want more information. We can see that
𝑏0 is fluctuating tremendously for quite a while. For example, at 𝑐9 we
see that we lose—or, more precisely, merge—a component. But which
component did we merge? The one introduced at critical value 𝑐7 ? Or
the one introduced at critical value 𝑐4 ? A similar question can be asked
of the cycles. The last critical value killed a cycle, but which one? The
one introduced at 𝑐12 , 𝑐13 , or 𝑐15 ? A homological sequence, while a nice
summary of a discrete Morse function, does not include any of this in-
formation. These questions lead to the idea of persistent homology. We
want to know not only what the homological sequence is, but also which
topological information seems to persist and which is just “noise.”
In Section 5.1.1, we saw how the sequence of level subcomplexes
induced by the critical values builds a simplicial complex in stages. This
is a special kind of filtration; if 𝐾 is a simplicial complex, a filtration
of 𝐾 is a sequence of subcomplexes
𝐾0 ⊆ 𝐾1 ⊆ ⋯ ⊆ 𝐾𝑚−1 .
In order to perform persistent homology computations, we work
with a filtration induced by a basic discrete Morse function (see Defi-
nition 2.3). Let 𝑓 ∶ 𝐾 → ℝ be a basic discrete Morse function. We will
forgo the definitions of persistent homology for now, as the technicali-
ties will be shown in all their gory detail in Section 5.2.4. For now, it suf-
fices to know that we can store all needed information about topology
change in a single, albeit giant, matrix. First, we will put a total ordering
induced by the basic discrete Morse function on the simplices. For any
two simplices 𝜎, 𝜏 ∈ 𝐾, if 𝑓(𝜎) < 𝑓(𝜏), define 𝜎 < 𝜏. If 𝑓(𝜎) = 𝑓(𝜏), then
define 𝜎 < 𝜏 if dim(𝜎) < dim(𝜏).¹
Exercise 5.13. Show that the above defines a total ordering on 𝐾; that
is, for every 𝜎, 𝜏 ∈ 𝐾, either 𝜎 < 𝜏 or 𝜏 < 𝜎.
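Because a basic discrete Morse function shares a value only within a regular pair, the order can be realized in code as an ordinary sort on the key (𝑓(𝜎), dim(𝜎)). The sketch below uses our own frozenset representation of simplices; it is an illustration, not the book's notation.

```python
# f-values on a path v0 -e01- v1 -e12- v2, where each regular pair
# shares a value; the dimension tiebreak then makes the order total.
f = {frozenset({0}): 0, frozenset({1}): 1, frozenset({0, 1}): 1,
     frozenset({2}): 2, frozenset({1, 2}): 2}

# Sort key (f(sigma), dim(sigma)) is exactly the lexicographic
# ordering described in the footnote.
ordered = sorted(f, key=lambda s: (f[s], len(s) - 1))
```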
We will use this total ordering to organize our matrix. Let’s look at
an example.
Example 5.14. Let 𝐾 be the simplicial complex along with the basic
discrete Morse function below:
[Figure: the simplicial complex 𝐾 with its basic discrete Morse function; the 20 simplices carry the labels 0 through 12, with some labels repeated on regular pairs.]
The reader can check that this discrete Morse function is basic. We
now construct our linear transformation that encodes all the persistent
information. The size of the matrix is determined by the number of sim-
plices of 𝐾, and since 𝐾 has 20 simplices, this will be a 20 × 20 matrix.
¹Note that if we associate to the simplex 𝜍 the ordered pair (𝑓(𝜍), dim(𝜍)), then essentially we
are defining a lexicographic ordering.
Let 𝜎𝑖 denote the simplex labeled 𝑖:
[Figure: the same complex with each simplex named by type and label: vertices 𝑣0 , 𝑣1 , 𝑣2 , 𝑣3 , 𝑣4 , 𝑣5 , 𝑣8 ; edges 𝑒1 , 𝑒3 , 𝑒4 , 𝑒5 , 𝑒6 , 𝑒7 , 𝑒8 , 𝑒9 , 𝑒10 , 𝑒11 ; 2-simplices 𝑓7 , 𝑓11 , 𝑓12 .]
We place a 1 in entry 𝑎𝑖𝑗 if and only if 𝜎𝑖 is a codimension-1 face of
𝜏𝑗 . All other entries are 0. In practice, we have
𝑣0 𝑣1 𝑒1 𝑣2 𝑣3 𝑒3 𝑣4 𝑒4 𝑣5 𝑒5 𝑒6 𝑒7 𝑓7 𝑣8 𝑒8 𝑒9 𝑒10 𝑒11 𝑓11 𝑓12
𝑣0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑣1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0
𝑒1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑣2 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0
𝑣3 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 0 0
𝑒3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
𝑣4 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0
𝑒4 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
𝑣5 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 1 1 0 0
𝑒5 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
𝑒6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
𝑒7 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
𝑓7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑣8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0
𝑒8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
𝑒9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
𝑒10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
𝑒11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
𝑓11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑓12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
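A matrix of this kind can be assembled mechanically from the ordered list of simplices: each codimension-1 face of a simplex is obtained by deleting one vertex. The sketch below is our own illustration (on a small filled triangle, not the complex above), with simplices as frozensets.

```python
def boundary_matrix(ordered):
    """Entry a[i][j] = 1 iff simplex i is a codimension-1 face of
    simplex j, with the simplices listed in the total order."""
    index = {s: i for i, s in enumerate(ordered)}
    n = len(ordered)
    a = [[0] * n for _ in range(n)]
    for j, tau in enumerate(ordered):
        for v in tau:                  # delete one vertex at a time
            face = tau - {v}
            if face:                   # skip the empty set
                a[index[face]][j] = 1
    return a

# A filled triangle, built vertex by vertex:
ordered = [frozenset({0}), frozenset({1}), frozenset({0, 1}),
           frozenset({2}), frozenset({0, 2}), frozenset({1, 2}),
           frozenset({0, 1, 2})]
a = boundary_matrix(ordered)
```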
For each 𝑗, define low(𝑗) to be the row index of the 1 in column 𝑗
with the property that any other 1 in column 𝑗 has row index strictly
less than low(𝑗). Otherwise, if column 𝑗 consists of all 0s, then low(𝑗)
is undefined. For example, low(6) = 5 since the lowest 1 in the column
6 (labeled 𝑒3 ) is found in row 5 (labeled by 𝑣3 ); low(9) (in the column
labeled 𝑣5 ) would be undefined. We will reduce the matrix above so that
it has the following property: whenever 𝑗 ≠ 𝑖 are two non-zero columns,
low(𝑖) ≠ low(𝑗). This is easy in practice by working left to right. Working
left to right, we see that everything is a-okay until we reach column 12
(labeled 𝑒7 ), in which case low(12) = low(10) = 9. We then simply add
columns 10 and 12 modulo 2, replacing column 12 with the result. This
yields
𝑣0 𝑣1 𝑒1 𝑣2 𝑣3 𝑒3 𝑣4 𝑒4 𝑣5 𝑒5 𝑒6 𝑒7 𝑓7 𝑣8 𝑒8 𝑒9 𝑒10 𝑒11 𝑓11 𝑓12
𝑣0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑣1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0
𝑒1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑣2 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0
𝑣3 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 0 0
𝑒3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
𝑣4 0 0 0 0 0 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0
𝑒4 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
𝑣5 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 1 0 0
𝑒5 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
𝑒6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
𝑒7 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
𝑓7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑣8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0
𝑒8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
𝑒9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
𝑒10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
𝑒11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
𝑓11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑓12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
But now notice that low(12) = low(8). Okay, repeat this same pro-
cess, this time adding columns 12 and 8, then replacing column 12 with
the result. Now column 12 is the zero column, so low(12) is undefined.
That’s no problem, so we move on. After reducing the matrix, you should
obtain
𝑣0 𝑣1 𝑒1 𝑣2 𝑣3 𝑒3 𝑣4 𝑒4 𝑣5 𝑒5 𝑒6 𝑒7 𝑓7 𝑣8 𝑒8 𝑒9 𝑒10 𝑒11 𝑓11 𝑓12
𝑣0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑣1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
𝑒1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑣2 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0
𝑣3 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0
𝑒3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
𝑣4 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0
𝑒4 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
𝑣5 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
𝑒5 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
𝑒6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
𝑒7 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
𝑓7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑣8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
𝑒8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
𝑒9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
𝑒10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
𝑒11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
𝑓11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑓12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Now we need to interpret the matrix. The value low(𝑖) is key. It
identifies the point at which a simplex generated a Betti number, and it
also tells us when that Betti number died. It will help to think of the
values of the basic discrete Morse function as units of time. Take the
situation low(11) = 4, corresponding to the column and row labeled 𝑒6
and 𝑣2 , respectively. This means that at time 2, 𝑣2 generated a new Betti
number. Since 𝑣2 is a vertex, it generated a new component. However,
its life was cut short: the edge 𝑒6 killed it at time 6. Thus, a component
was born at time 2 and died at time 6. Consider another example where
low(20) = 16, corresponding to 𝑓12 and 𝑒9 . Using the same interpretation
scheme, 𝑒9 generated homology at time 9 and died at time 12 at the hands
of the face 𝑓12 . We can make this analysis for any column for which
low(𝑖) exists.
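The left-to-right reduction described above is short enough to code in full. The following sketch (our own names; the standard persistence reduction over ℤ/2, demonstrated on a small filled triangle rather than the book's complex) computes low, reduces the columns, and reads off the birth and death pairs.

```python
def low(col):
    """Row index of the lowest 1 in the column, or None if none."""
    ones = [i for i, x in enumerate(col) if x]
    return ones[-1] if ones else None

def reduce_columns(a):
    """While column j shares its low with an earlier column k,
    replace column j by column j + column k (mod 2)."""
    n = len(a)
    cols = [[a[i][j] for i in range(n)] for j in range(n)]
    owner = {}                     # low value -> column that claims it
    for j in range(n):
        while low(cols[j]) is not None and low(cols[j]) in owner:
            k = owner[low(cols[j])]
            cols[j] = [(x + y) % 2 for x, y in zip(cols[j], cols[k])]
        if low(cols[j]) is not None:
            owner[low(cols[j])] = j
    return cols

def birth_death_pairs(cols):
    """low(j) = i means simplex i gave birth and simplex j killed it."""
    return {low(c): j for j, c in enumerate(cols) if low(c) is not None}

# Boundary matrix of a filled triangle, order v0 v1 e01 v2 e02 e12 f012:
n = 7
a = [[0] * n for _ in range(n)]
for i, j in [(0, 2), (1, 2), (0, 4), (3, 4), (1, 5), (3, 5),
             (2, 6), (4, 6), (5, 6)]:
    a[i][j] = 1
pairs = birth_death_pairs(reduce_columns(a))
# pairs == {1: 2, 3: 4, 5: 6}: each extra vertex is merged by an edge,
# and the cycle born at e12 is killed by the 2-simplex; v0 persists.
```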
What about a column where low(𝑖) is not defined, such as the col-
umn for 𝑣2 ? We need not worry about that particular column, since we
saw above that 𝑣2 was already born and killed. There are also columns
that satisfy low(𝑖) = 𝑖 − 1, such as column 6. This corresponds to a regu-
lar pair, i.e., a simultaneous birth and death. Since there is no topol-
ogy change at such instances, we can ignore them. Finally, there will be
some columns (in this case 𝑣0 and 𝑒10 ) for which low(𝑖) is not defined and
which we never detected being born through an investigation of the low
function. These correspond to homology that is born but never dies—it
persists until the end. Such homology is born at its index and never dies.
Thus column 𝑣0 corresponds to a component generated by 𝑣0 that never
dies, and column 𝑒10 corresponds to a cycle generated by 𝑒10 that never
dies. This makes perfect sense, as in the end we expect homology that
persists to be precisely the homology of the original complex. All of the
birth and death information is summed up in the following bar code:
[Bar code: 𝑏1 bars for 𝑒9 from 9 to 12 and for 𝑒10 from 10 onward; 𝑏0 bars for 𝑣2 from 2 to 6 and for 𝑣0 from 0 onward; horizontal axis: critical values 0 through 12.]
The solid bars represent a component (i.e., 𝑏0 ), while the dashed
lines represent a cycle (i.e., 𝑏1 ).
Notice that given a bar code induced by a basic discrete Morse func-
tion 𝑓, we may recover the homological sequence of 𝑓 by drawing a ver-
tical line at each critical value 𝑐𝑗 . The number of times the vertical line
intersects a horizontal bar corresponding to 𝑏𝑖 (other than a death time)
is precisely 𝐵𝑖 (𝑗).
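This recipe for recovering the homological sequence from a bar code is easy to automate. The sketch below is our own (bars encoded as (dimension, birth, death) triples, with death `None` for a bar that persists); it counts, at each critical value, the bars alive there, excluding death times as in the text.

```python
def homological_sequence(bars, critical_values):
    """B_d(j) = number of dimension-d bars alive at c_j, where a bar
    [b, d) counts at times b <= c < d (never at its death time)."""
    max_dim = max(dim for dim, _, _ in bars)
    B = [[0] * len(critical_values) for _ in range(max_dim + 1)]
    for dim, birth, death in bars:
        for j, c in enumerate(critical_values):
            if birth <= c and (death is None or c < death):
                B[dim][j] += 1
    return B

# A circle built from two arcs: a second component appears at time 1,
# is merged at time 2, and a cycle is born at time 2.
bars = [(0, 0, None), (0, 1, 2), (1, 2, None)]
B = homological_sequence(bars, [0, 1, 2])
```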
Example 5.15. Now we will use persistent homology to investigate the
discrete Morse function that induced the homological sequence in Ex-
ample 5.12. The homological sequence was induced by the following
discrete Morse function:
[Figure: a simplicial complex whose simplices carry the labels 0, 1, 2, … , 16.]
Note that this discrete Morse function is basic but has no regular
simplices. As above, we determine its boundary matrix according to the
rule that if 𝜎𝑖 is a simplex of 𝐾 representing the column indexed by 𝑖,
we place a 1 in entry 𝑎𝑖𝑗 if and only if 𝜎𝑖 is a codimension-1 face of 𝜏𝑗 . All other
entries are 0.
𝑣0 𝑣1 𝑣2 𝑒3 𝑣4 𝑒5 𝑣6 𝑣7 𝑒8 𝑒9 𝑣10 𝑒11 𝑒12 𝑒13 𝑒14 𝑒15 𝑓16
𝑣0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 1 0
𝑣1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 1 0 0
𝑣2 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0
𝑒3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑣4 0 0 0 0 0 0 0 0 0 1 0 1 0 0 1 1 0
𝑒5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑣6 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
𝑣7 0 0 0 0 0 0 0 0 1 1 0 0 1 0 0 0 0
𝑒8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑒9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑣10 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0
𝑒11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑒12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑒13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
𝑒14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
𝑒15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
𝑓16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Now reduce this matrix as specified above:
𝑣0 𝑣1 𝑣2 𝑒3 𝑣4 𝑒5 𝑣6 𝑣7 𝑒8 𝑒9 𝑣10 𝑒11 𝑒12 𝑒13 𝑒14 𝑒15 𝑓16
𝑣0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0
𝑣1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0
𝑣2 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑒3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑣4 0 0 0 0 0 0 0 0 0 1 0 1 0 0 1 0 0
𝑒5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑣6 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0
𝑣7 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
𝑒8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑒9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑣10 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
𝑒11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑒12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
𝑒13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
𝑒14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
𝑒15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
𝑓16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
The bar code is thus
[Bar code: 𝑏1 bars generated by 𝑒12 , 𝑒13 , and 𝑒15 ; 𝑏0 bars generated by 𝑣0 , 𝑣1 , 𝑣2 , 𝑣4 , 𝑣6 , 𝑣7 , and 𝑣10 ; horizontal axis: critical values 0 through 16.]
Problem 5.16. Find the barcode for the discrete Morse function in Prob-
lem 5.3.
Problem 5.17. Compute the barcode of
[Figure: a simplicial complex whose simplices carry the labels 0, 1, 2, 3, 5, 5, 7, 9, 9.]
Remark 5.18. In the field of statistics, we know that different modes
of presentation can yield varying insights into the same data. This data
can be displayed as a histogram, pie chart, or some other graphical form,
and sometimes you can see a trend in one graphic display that you can’t
see in another. Likewise, a bar code is one nice graphical representation
of the data from our reduced matrix, but this is not our only option. An-
other option is to display our data as a persistence diagram. To do this,
we work in the first quadrant of ℝ2 . We begin by drawing the diagonal
consisting of all points (𝑥, 𝑥). The data points given here have first co-
ordinate the birth time and second coordinate the death time. A Betti
number that is born and never dies is a point at infinity and is plotted
at the very top of the diagram. For example, the bar code found in Ex-
ample 5.14 has corresponding persistence diagram given below, where
a solid dot is a component and an open dot is a cycle.
[Persistence diagram: the diagonal, a solid dot at (2, 6) for the component born at time 2 and merged at time 6, an open dot at (9, 12) for the cycle born at time 9 and killed at time 12, and points at infinity for the component generated by 𝑣0 and the cycle generated by 𝑒10 ; both axes run from 0 to 12.]
Part of the interpretation of the persistence diagram is that points
farther away from the diagonal are more significant, while points closer
to the diagonal tend to be considered noise. We will look at persistence
diagrams in more detail in Section 5.2.4.
Exercise 5.19. In a persistence diagram, is it ever possible to plot a point
below the diagonal? Why or why not?
Problem 5.20. Find the persistence diagram for the discrete Morse func-
tion in Problem 5.3.
Problem 5.21. Compute the persistence diagram of the complex in Prob-
lem 5.17.
5.2. Persistent homology of discrete Morse functions
In the previous section, we used persistent homology to study the in-
duced filtration of a discrete Morse function. In this section, we are in-
terested in building the framework of persistent homology from discrete
Morse theory. I am indebted to Uli Bauer for this concept. The material
in this section is based on part of his masterful PhD thesis [24].
5.2.1. Distance between discrete Morse functions. There are sev-
eral natural notions of a distance between two functions into ℝ. The one
we will use is called the uniform norm. Let 𝑓 ∶ 𝐾 → ℝ be any function
on a finite set 𝐾 (in particular, a discrete Morse function on a simplicial
complex). The uniform norm of 𝑓, denoted by ‖𝑓‖∞ , is defined by
‖𝑓‖∞ ∶= max{|𝑓(𝜎)| ∶ 𝜎 ∈ 𝐾}.
Exercise 5.22. Compute ‖𝑓‖∞ for 𝑓 the discrete Morse functions de-
fined in Problems 5.3 and 5.17.
Using the uniform norm, we can define a distance between discrete
Morse functions on a fixed simplicial complex in the following way: Let
𝑓, 𝑔 ∶ 𝐾 → ℝ be discrete Morse functions. The distance between 𝑓 and
𝑔 is defined by 𝑑(𝑓, 𝑔) ∶= ‖𝑓 − 𝑔‖∞ . Any good theory of distance will
satisfy the four properties found in the following proposition:
Proposition 5.23. Let 𝑓, 𝑔 ∶ 𝐾 → ℝ be discrete Morse functions. Then
(i) 𝑑(𝑓, 𝑔) = 0 if and only if 𝑓 = 𝑔;
(ii) 𝑑(𝑓, 𝑔) ≥ 0;
(iii) 𝑑(𝑓, 𝑔) = 𝑑(𝑔, 𝑓);
(iv) 𝑑(𝑓, ℎ) + 𝑑(ℎ, 𝑔) ≥ 𝑑(𝑓, 𝑔).
Proof. Suppose that 𝑑(𝑓, 𝑔) = 0. Then 0 = max{|𝑓(𝜎) − 𝑔(𝜎)| ∶ 𝜎 ∈ 𝐾}
so that |𝑓(𝜎) − 𝑔(𝜎)| = 0 for all 𝜎 ∈ 𝐾. Hence 𝑓 = 𝑔. Conversely, if
𝑓 = 𝑔, then 𝑑(𝑓, 𝑔) = max{|𝑓(𝜎) − 𝑓(𝜎)| ∶ 𝜎 ∈ 𝐾} = 0. For (ii), since
|𝑓(𝜎)−𝑔(𝜎)| ≥ 0, 𝑑(𝑓, 𝑔) ≥ 0. For (iii), since |𝑓(𝜎)−𝑔(𝜎)| = |𝑔(𝜎)−𝑓(𝜎)|,
we have that 𝑑(𝑓, 𝑔) = 𝑑(𝑔, 𝑓). Finally, the standard triangle inequality
tells us that
|𝑓(𝜎) − 𝑔(𝜎)| ≤ |𝑓(𝜎) − ℎ(𝜎)| + |ℎ(𝜎) − 𝑔(𝜎)|;
hence 𝑑(𝑓, 𝑔) ≤ 𝑑(𝑓, ℎ) + 𝑑(ℎ, 𝑔). □
We will use distances between discrete Morse functions at the end
of Section 5.2.3 and in the main results of Section 5.2.4.
Example 5.24. Let 𝑓, 𝑔 ∶ 𝐾 → ℝ be two discrete Morse functions, and
define 𝑓𝑡 ∶= (1 − 𝑡)𝑓 + 𝑡𝑔 for all 𝑡 ∈ [0, 1]. To gain a little bit of prac-
tice manipulating this distance, we will show by direct computation that
‖𝑓𝑟 − 𝑓𝑠 ‖∞ = |𝑠 − 𝑟|‖𝑓 − 𝑔‖∞ for all 𝑟, 𝑠 ∈ [0, 1]. We have
‖𝑓𝑟 − 𝑓𝑠 ‖∞ = max{|𝑓𝑟 (𝜎) − 𝑓𝑠 (𝜎)| ∶ 𝜎 ∈ 𝐾}
= max{|𝑓(𝜎)[(1 − 𝑟) − (1 − 𝑠)] − 𝑔(𝜎)[𝑠 − 𝑟]| ∶ 𝜎 ∈ 𝐾}
= max{|𝑠 − 𝑟||𝑓(𝜎) − 𝑔(𝜎)| ∶ 𝜎 ∈ 𝐾}
= |𝑠 − 𝑟| max{|𝑓(𝜎) − 𝑔(𝜎)| ∶ 𝜎 ∈ 𝐾}
= |𝑠 − 𝑟|‖𝑓 − 𝑔‖∞ .
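The identity in Example 5.24 can also be checked numerically. The sketch below is our own (functions stored as dicts on string simplex names, which are an assumption for illustration only).

```python
def distance(f, g):
    """d(f, g) = max over simplices of |f(sigma) - g(sigma)|."""
    return max(abs(f[s] - g[s]) for s in f)

def convex(f, g, t):
    """The interpolation f_t = (1 - t) f + t g, simplexwise."""
    return {s: (1 - t) * f[s] + t * g[s] for s in f}

f = {"v0": 0.0, "v1": 2.0, "e01": 1.0}
g = {"v0": 1.0, "v1": 0.0, "e01": 1.0}
r, s = 0.25, 0.75
lhs = distance(convex(f, g, r), convex(f, g, s))
rhs = abs(s - r) * distance(f, g)
# lhs and rhs agree, as the computation in Example 5.24 predicts.
```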
5.2.2. Pseudo-discrete Morse functions. Given a discrete Morse
function 𝑓, we know from Section 2.2 that 𝑓 induces a unique gradi-
ent vector field 𝑉𝑓 . On the one hand, this is a good thing, since there is
no ambiguity about what gradient vector field is associated with a par-
ticular discrete Morse function. On the other hand, the fact that we only
have one gradient vector field implies a certain strictness of the defini-
tion of a discrete Morse function. In an attempt to relax the definition of
a discrete Morse function so that it can induce multiple gradient vector
fields, we introduce the concept of a pseudo-discrete Morse function.
Definition 5.25. A function 𝑓 ∶ 𝐾 → ℝ is called a pseudo-discrete
Morse function if there is a gradient vector field 𝑉 such that whenever
𝜎(𝑝) < 𝜏(𝑝+1) , the following conditions hold:
• (𝜎, 𝜏) ∉ 𝑉 implies 𝑓(𝜎) ≤ 𝑓(𝜏), and
• (𝜎, 𝜏) ∈ 𝑉 implies 𝑓(𝜎) ≥ 𝑓(𝜏).
Any such 𝑉 and 𝑓 are called consistent.
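Consistency is a finite check, so it can be verified mechanically. A minimal sketch, with our own names (`is_consistent`, simplices as strings, incidences as codimension-1 pairs); the example below illustrates why a constant function is consistent with every gradient vector field.

```python
def is_consistent(f, V, incidences):
    """Definition 5.25: for each codimension-1 incidence (sigma, tau),
    pairs in V need f(sigma) >= f(tau); all others need f(sigma) <= f(tau)."""
    for sigma, tau in incidences:
        if (sigma, tau) in V:
            if f[sigma] < f[tau]:
                return False
        elif f[sigma] > f[tau]:
            return False
    return True

# A single edge v0 -e- v1 with the constant function 2:
incidences = [("v0", "e"), ("v1", "e")]
const = {"v0": 2, "v1": 2, "e": 2}
ok_empty = is_consistent(const, set(), incidences)          # consistent
ok_pair = is_consistent(const, {("v0", "e")}, incidences)   # also consistent
bad = is_consistent({"v0": 1, "v1": 3, "e": 0}, set(), incidences)
```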
A good strategy for putting a pseudo-discrete Morse function on a
simplicial complex is to start with the gradient vector field you want and
then label the simplices accordingly.
Exercise 5.26. Give a definition of a discrete Morse function in the style
of Definition 5.25.
As we will see, pseudo-discrete Morse functions can look quite dif-
ferent from discrete Morse functions.
Example 5.27. Let 𝐺 be the simplicial complex below and 𝑓 ∶ 𝐺 → ℝ
the labeling
[Figure: a graph 𝐺 with every vertex and edge labeled 2.]
To confirm that this is a pseudo-discrete Morse function, we need
to find a gradient vector field 𝑉 such that (𝜎, 𝜏) ∉ 𝑉 implies 𝑓(𝜎) ≤
𝑓(𝜏) and (𝜎, 𝜏) ∈ 𝑉 implies 𝑓(𝜎) ≥ 𝑓(𝜏) whenever 𝜎(𝑝) < 𝜏(𝑝+1) . The
following gradient vector field satisfies this property:
So does this gradient vector field:
In fact, any gradient vector field on 𝐺 is consistent with 𝑓. More
generally, for any simplicial complex 𝐾, the function 𝑓 ∶ 𝐾 → ℝ defined
by 𝑓(𝜎) = 2 for all 𝜎 ∈ 𝐾 is a pseudo-discrete Morse function consistent
with all possible gradient vector fields on 𝐾. In this case, it is interesting
to ask how many such gradient vector fields are consistent with the con-
stant function. We will give a partial answer to this question in Chapter
7 in the case where 𝐾 is 1-dimensional or a graph.
By contrast, Problem 5.29 has you investigate when a vector field
consistent with a pseudo-discrete Morse function is unique.
Problem 5.28. Show that a pseudo-discrete Morse function 𝑓 is flat if
and only if it is consistent with the empty vector field.
Problem 5.29. Find and prove a characterization for when a pseudo-
discrete Morse function has a unique gradient vector field.
What would something that is not a pseudo-discrete Morse function
look like?
Problem 5.30. Consider the following simplicial complex 𝐾 with label-
ing 𝑓 ∶ 𝐾 → ℝ:
[Figure: a simplicial complex whose simplices carry the labels 0, 1, 2, 3, 3, 4, 6, 7, 7, 8.]
Show that this is not a pseudo-discrete Morse function.
In Section 4.1.3, we saw that a linear combination of flat discrete
Morse functions yielded another flat discrete Morse function. This also
works for pseudo-discrete Morse functions, so that we may obtain new
pseudo-discrete Morse functions from old.
Lemma 5.31. Let 𝑓 and 𝑔 be pseudo-discrete Morse functions consis-
tent with a gradient vector field 𝑉, and let 𝑡1 , 𝑡2 ≥ 0 be real numbers.
Then 𝑡1 𝑓 + 𝑡2 𝑔 is a pseudo-discrete Morse function consistent with 𝑉.
Problem 5.32. Prove Lemma 5.31.
138 5. Discrete Morse theory and persistent homology
One thing that a pseudo-discrete Morse function does for us, similar
to a basic discrete Morse function, is to induce a strict total order on
the set of simplices, that is, a relation ≺ on 𝐾 satisfying the following
three properties:
(a) Irreflexive: for all 𝜎 ∈ 𝐾, 𝜎 ⊀ 𝜎.
(b) Asymmetric: if 𝜎 ≺ 𝜏, then 𝜏 ⊀ 𝜎.
(c) Transitive: 𝜎 ≺ 𝜏 and 𝜏 ≺ 𝛾 implies 𝜎 ≺ 𝛾.
The order is total in the sense that for any distinct 𝜎, 𝜏 ∈ 𝐾, either 𝜎 ≺ 𝜏 or
𝜏 ≺ 𝜎. Simply put, all simplices are comparable. We will build the total
order from a partial order.
Definition 5.33. Let 𝑉 be a gradient vector field on 𝐾. Define a relation
←𝑉 on 𝐾 such that whenever 𝜎(𝑝) < 𝜏(𝑝+1) , the following hold:
(a) if (𝜎, 𝜏) ∉ 𝑉, then 𝜎 ←𝑉 𝜏;
(b) if (𝜎, 𝜏) ∈ 𝑉, then 𝜏 ←𝑉 𝜎.
Let ≺𝑉 be the transitive closure of ←𝑉 . Then ≺𝑉 is the strict partial
order induced by 𝑉.
The transitive closure of a relation is a new relation that forces tran-
sitivity by fiat; i.e., if 𝑎 < 𝑏 and 𝑏 < 𝑐, the transitive closure <′ has, by
definition, 𝑎 <′ 𝑐.
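On a machine, the relation ←𝑉 and its transitive closure take only a few lines. The sketch below is our own encoding (simplices as frozensets); the example complex and the pairing 𝑉 are hypothetical, chosen to match the small poset of Example 5.39 below.

```python
from itertools import combinations

def codim1_pairs(K):
    """All pairs (sigma, tau) with sigma a codimension-1 face of tau in K."""
    for tau in K:
        for c in combinations(sorted(tau), len(tau) - 1):
            sigma = frozenset(c)
            if sigma and sigma in K:
                yield sigma, tau

def strict_order(K, V):
    """Transitive closure of <-_V (Definition 5.33), returned as the set
    of ordered pairs (x, y) meaning x precedes y."""
    R = set()
    for sigma, tau in codim1_pairs(K):
        # membership in V reverses the arrow: tau <-_V sigma
        R.add((tau, sigma) if (sigma, tau) in V else (sigma, tau))
    changed = True
    while changed:                       # naive transitive closure
        changed = False
        for x, y in list(R):
            for y2, z in list(R):
                if y == y2 and (x, z) not in R:
                    R.add((x, z))
                    changed = True
    return R

# Vertices u, v, w and edges uv, uw, with hypothetical V = {(v, uv), (w, uw)}
u, v, w = (frozenset({x}) for x in 'uvw')
uv, uw = frozenset('uv'), frozenset('uw')
K = {u, v, w, uv, uw}
V = {(v, uv), (w, uw)}
print(len(strict_order(K, V)))  # 6 relations, including u before v and u before w
```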
Example 5.34. Consider the gradient vector field 𝑉 from Example 2.48:
[Figure: the gradient vector field 𝑉 from Example 2.48, drawn on a complex with vertices 𝑣0 , … , 𝑣5 , edges 𝑒0 , … , 𝑒6 , and the 2-simplex 𝑓1 .]
Let’s begin to investigate ≺𝑉 by taking pairs of simplices and asking
if they are related. For example, consider 𝑣3 and 𝑒4 . Since (𝑣3 , 𝑒4 ) ∈ 𝑉,
this means that 𝑒4 ←𝑉 𝑣3 so that 𝑒4 ≺𝑉 𝑣3 . We also have that (𝑣4 , 𝑒4 ) ∉
𝑉 so 𝑣4 ←𝑉 𝑒4 , and hence 𝑣4 ≺𝑉 𝑒4 . Since ≺𝑉 is defined to be the
transitive closure of ←𝑉 , we get transitive relations for free; e.g., 𝑣4 ≺𝑉
𝑣3 . Continuing in this manner, we obtain a partial order on all of 𝐾 (see
Problem 5.35).
Problem 5.35. Draw the Hasse diagram showing the partial order rela-
tions induced by 𝑉 from Example 5.34.
Problem 5.36. Show that Definition 5.33 yields a strict partial order on
the set of simplices of a fixed simplicial complex 𝐾.
As you might have realized, the way to think about the partial order
≺𝑉 is that as simplices increase in the partial order, the values of 𝑓 weakly increase.
Proposition 5.37. If 𝛼 ≺𝑉 𝛽, then 𝑓(𝛼) ≤ 𝑓(𝛽).
Proof. Suppose 𝛼 ≺𝑉 𝛽. Then there exists a sequence
𝛼 = 𝛼𝑛 ←𝑉 𝛼𝑛−1 ←𝑉 ⋯ ←𝑉 𝛼0 = 𝛽.
For any pair 𝛼𝑖 ←𝑉 𝛼𝑖−1 above, the definition of ←𝑉 implies that
either
(a) (𝛼𝑖 , 𝛼𝑖−1 ) ∉ 𝑉 with 𝛼𝑖 < 𝛼𝑖−1 or
(b) (𝛼𝑖−1 , 𝛼𝑖 ) ∈ 𝑉 with 𝛼𝑖−1 < 𝛼𝑖 .
In the former case, the definition of a pseudo-discrete Morse function
consistent with 𝑉 implies that 𝑓(𝛼𝑖 ) ≤ 𝑓(𝛼𝑖−1 ). In the latter, the definition of being
an element of the gradient vector field implies that 𝑓(𝛼𝑖−1 ) ≥ 𝑓(𝛼𝑖 ). In
either case, we have that 𝑓(𝛼𝑖−1 ) ≥ 𝑓(𝛼𝑖 ) for all 𝑖, and hence 𝑓(𝛼) ≤
𝑓(𝛽). □
In addition to ≺𝑉 , we have another partial order, denoted by ≺𝑓 .
This one is induced by a pseudo-discrete Morse function 𝑓 ∶ 𝐾 → ℝ and
is defined by 𝛼 ≺𝑓 𝛽 if and only if 𝑓(𝛼) < 𝑓(𝛽) for any simplices (not
necessarily codimension-1 pairs) 𝛼, 𝛽 ∈ 𝐾. If there are no two simplices
𝜎 and 𝜏 such that 𝜎 ≺𝑉 𝜏 and 𝜏 ≺𝑓 𝜎, we say that the two orders are
consistent. We may now use the consistent orders as the basis of a strict
total order on the simplices of 𝐾.
Definition 5.38. A linear extension of a poset (𝑃, <) is a permutation
of all the elements 𝑝1 , 𝑝2 , … , 𝑝𝑚 ∈ 𝑃 such that if 𝑝𝑖 < 𝑝𝑗 , then 𝑖 < 𝑗. Let
𝑓 ∶ 𝐾 → ℝ be a pseudo-discrete Morse function and 𝑉 a gradient vector
field consistent with 𝑓. A strict total order ≺ consistent with 𝑓 and
𝑉 is a linear extension of ≺𝑉 (and hence ≺𝑓 ).
A linear extension of ≺𝑉 is really just a choice of total ordering that
respects the partial ordering of ≺𝑉 . We illustrate with an example.
Example 5.39. Let 𝑓 ∶ 𝐾 → ℝ be the pseudo-discrete Morse function
below.
[Figure: a simplicial complex with vertices 𝑢, 𝑣, 𝑤 and edges 𝑢𝑣, 𝑢𝑤, where 𝑓(𝑢) = 3, 𝑓(𝑣) = 6, 𝑓(𝑤) = 8, and the edge labels include 6.]
One possible 𝑉 consistent with 𝑓 is 𝑉 = {(𝑣, 𝑢𝑣), (𝑤, 𝑢𝑤)},
and the corresponding partial order is seen in the Hasse diagram
𝑣        𝑤
|        |
𝑢𝑣      𝑢𝑤
   \    /
     𝑢
A linear extension ≺ is simply a total ordering that respects the
above. Several are listed below:
• 𝑢 ≺ 𝑢𝑣 ≺ 𝑣 ≺ 𝑢𝑤 ≺ 𝑤
• 𝑢 ≺ 𝑢𝑤 ≺ 𝑤 ≺ 𝑢𝑣 ≺ 𝑣
• 𝑢 ≺ 𝑢𝑣 ≺ 𝑢𝑤 ≺ 𝑤 ≺ 𝑣
• 𝑢 ≺ 𝑢𝑤 ≺ 𝑢𝑣 ≺ 𝑤 ≺ 𝑣
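Since this poset is tiny, all of its linear extensions can be enumerated by brute force; the four orders listed above are a sample of the six that exist. A Python sketch, with the cover relations transcribed from the Hasse diagram:

```python
from itertools import permutations

# Cover relations of the Hasse diagram of Example 5.39:
# u < uv < v and u < uw < w
covers = [('u', 'uv'), ('uv', 'v'), ('u', 'uw'), ('uw', 'w')]
elements = ['u', 'uv', 'v', 'uw', 'w']

def linear_extensions(elements, covers):
    """All total orders (as tuples) respecting the cover relations."""
    exts = []
    for perm in permutations(elements):
        pos = {e: i for i, e in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in covers):
            exts.append(perm)
    return exts

exts = linear_extensions(elements, covers)
print(len(exts))  # 6: u comes first, then the chains (uv, v) and (uw, w) interleave
```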
Using our new language of a total order ≺ consistent with a vec-
tor field 𝑉 and pseudo-discrete Morse function 𝑓, we will find it useful
to rephrase Theorem 4.27 (the collapse theorem). If 𝑓(𝛼) = 𝑐, write
𝐾(𝛼) ∶= 𝐾(𝑐), the level subcomplex containing all simplices whose values
under 𝑓 are at most 𝑓(𝛼).
Theorem 5.40 (Theorem 4.27 rephrased). Let 𝑉 be a gradient vector
field consistent with a pseudo-discrete Morse function 𝑓, and let ≺ be
a linear extension of ≺𝑉 . If 𝛼 ≺ 𝛽 and there are no critical simplices 𝛾
with 𝛼 ≺ 𝛾 ≺ 𝛽, then 𝐾(𝛽) ↘ 𝐾(𝛼).
5.2.3. Flat pseudo-discrete Morse functions. Recall from Section
4.1.3 that a discrete Morse function 𝑓 is called flat if whenever (𝜎, 𝜏) is
a regular pair, 𝑓(𝜎) = 𝑓(𝜏). We now define a flat pseudo-discrete Morse
function in terms of the gradient vector field.
Definition 5.41. Let 𝑓 ∶ 𝐾 → ℝ be a pseudo-discrete Morse function
consistent with a gradient vector field 𝑉. We call 𝑓 flat if whenever
𝜎(𝑝) < 𝜏(𝑝+1) , we have that
• if (𝜎, 𝜏) ∉ 𝑉, then 𝑓(𝜎) ≤ 𝑓(𝜏);
• if (𝜎, 𝜏) ∈ 𝑉, then 𝑓(𝜎) = 𝑓(𝜏).
The solution to Problem 5.30 required you to compute a “minimal”
set of arrows on the simplicial complex. In general, we can define this
for any pseudo-discrete Morse function.
Definition 5.42. Let 𝑓 be a pseudo-discrete Morse function. Define
𝑉 ∶= {(𝜎, 𝜏) ∶ 𝜎(𝑝) < 𝜏(𝑝+1) and 𝑓(𝜎) > 𝑓(𝜏)}
to be the minimal gradient vector field consistent with 𝑓.
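Definition 5.42 can be computed by a direct scan over codimension-1 pairs. A Python sketch, run on a toy labeling of our own (not one of the book's figures):

```python
from itertools import combinations

def minimal_gvf(K, f):
    """Definition 5.42: the pairs (sigma, tau), sigma a codim-1 face of
    tau, on which f strictly decreases going up: f(sigma) > f(tau)."""
    V = set()
    for tau in K:
        for c in combinations(sorted(tau), len(tau) - 1):
            sigma = frozenset(c)
            if sigma and sigma in K and f[sigma] > f[tau]:
                V.add((sigma, tau))
    return V

# A hypothetical labeled path graph a -- b -- c
a, b, c = (frozenset({x}) for x in 'abc')
ab, bc = frozenset('ab'), frozenset('bc')
K = {a, b, c, ab, bc}
f = {a: 0, b: 2, c: 1, ab: 1, bc: 3}

print(minimal_gvf(K, f) == {(b, ab)})  # True: only f(b) > f(ab) forces an arrow
```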
The adjective “minimal” is justified by the following:
Proposition 5.43. The minimal gradient vector field consistent with 𝑓
is minimal in the sense that it is the intersection of all gradient vector
fields consistent with 𝑓.
Proof. Let 𝑉 ′ = ⋂ 𝑉𝑖 be the intersection over all gradient vector fields
𝑉𝑖 consistent with 𝑓. Clearly 𝑉 ′ ⊆ 𝑉 for any gradient vector field 𝑉
consistent with 𝑓. Suppose (𝜎, 𝜏) ∈ 𝑉. Then by definition, 𝑓(𝜎) > 𝑓(𝜏).
Hence, (𝜎, 𝜏) ∈ 𝑉𝑖 for all gradient vector fields 𝑉𝑖 consistent with 𝑓, using
the contrapositive of the first line of Definition 5.25. Thus the result
follows. □
Example 5.44. If 𝑓 is the pseudo-discrete Morse function defined by
[Figure: a simplicial complex whose simplices are labeled by the values of 𝑓, ranging from 0 to 13.]
then its minimal consistent gradient vector field is given by
The vectors above are precisely the ones that must be there. In other
words, they correspond to regular pairs (𝜎, 𝜏) satisfying 𝑓(𝜎) > 𝑓(𝜏).
Exercise 5.45. Find all gradient vector fields consistent with the pseudo-
discrete Morse function from Example 5.44. Verify that the minimal gra-
dient vector field computed in that example is contained in all the vector
fields you find.
Equipped with a well-defined gradient vector field consistent with a
pseudo-discrete Morse function, we can now define a “flattening” oper-
ation on any pseudo-discrete Morse function, that is, an operation yield-
ing a flat pseudo-discrete Morse function with the same level subcom-
plexes as the original. If 𝑓 is a pseudo-discrete Morse function and 𝑉
the minimal gradient vector field consistent with 𝑓, the flattening of 𝑓,
denoted by 𝑓̄, is defined by
𝑓̄(𝜎) ∶= 𝑓(𝜏) if (𝜎, 𝜏) ∈ 𝑉 for some 𝜏, and 𝑓̄(𝜎) ∶= 𝑓(𝜎) otherwise.
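The flattening operation is likewise mechanical: compute the minimal gradient vector field and overwrite 𝑓(𝜎) by 𝑓(𝜏) on each of its pairs. A self-contained Python sketch; the complex and labels are our own toy example:

```python
from itertools import combinations

def minimal_gvf(K, f):
    """Pairs (sigma, tau), sigma a codim-1 face of tau, with f(sigma) > f(tau)."""
    return {(frozenset(c), tau)
            for tau in K
            for c in combinations(sorted(tau), len(tau) - 1)
            if c and frozenset(c) in K and f[frozenset(c)] > f[tau]}

def flatten(K, f):
    """The flattening f-bar: replace f(sigma) by f(tau) on every pair
    (sigma, tau) of the minimal gradient vector field."""
    fbar = dict(f)
    for sigma, tau in minimal_gvf(K, f):
        fbar[sigma] = f[tau]
    return fbar

# A hypothetical labeled path graph a -- b -- c
a, b, c = (frozenset({x}) for x in 'abc')
ab, bc = frozenset('ab'), frozenset('bc')
K = {a, b, c, ab, bc}
f = {a: 0, b: 2, c: 1, ab: 1, bc: 3}

fbar = flatten(K, f)
print(fbar[b])  # 1: the value at b is lowered to f(ab)
```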
Proposition 5.46. Let 𝑓̄ be the flattening of the pseudo-discrete Morse
function 𝑓 ∶ 𝐾 → ℝ. Then 𝑓 and 𝑓̄ have the same set of critical val-
ues 𝑐0 , … , 𝑐𝑛 . Furthermore, 𝐾𝑓 (𝑐𝑖 ) = 𝐾𝑓̄ (𝑐𝑖 ) for all critical values 𝑐𝑖 . In
particular, 𝑓 and 𝑓̄ are homologically equivalent.
Problem 5.47. Prove Proposition 5.46.
Exercise 5.48. Call 𝑓 a pure pseudo-discrete Morse function if 𝑓 is a
pseudo-discrete Morse function and not a discrete Morse function. If
𝑓 is a pure pseudo-discrete Morse function, is 𝑓̄ always a pure pseudo-
discrete Morse function, a discrete Morse function, or neither?
Flattening makes pseudo-discrete Morse functions “closer” to each other in
the following sense.
Theorem 5.49. Let 𝑓, 𝑔 ∶ 𝐾 → ℝ be pseudo-discrete Morse functions,
and let 𝑓̄ and 𝑔̄ be their respective flattenings. Then ‖𝑓̄ − 𝑔̄‖∞ ≤ ‖𝑓 − 𝑔‖∞ .
Proof. Let 𝜏 ∈ 𝐾 and let 𝑉 be a gradient vector field consistent with 𝑓.
We will show that |𝑓̄(𝜏) − 𝑔̄(𝜏)| ≤ ‖𝑓 − 𝑔‖∞ . If 𝜏 is critical for 𝑉, set
𝜎 = 𝜏. Otherwise, let 𝜎 be such that (𝜎, 𝜏) ∈ 𝑉. In a similar way to 𝜎,
define 𝜙 for 𝑔.
By definition of flattening, we have 𝑓̄(𝜏) = 𝑓(𝜏) and 𝑓̄(𝜙) = 𝑓(𝜙).
By definition of a pseudo-discrete Morse function, 𝑓̄(𝜏) ≥ 𝑓̄(𝜙). Then
𝑓̄(𝜏) = max{𝑓(𝜎), 𝑓(𝜙)}. Similarly, we have 𝑔̄(𝜏) = max{𝑔(𝜎), 𝑔(𝜙)}. By
Problem 5.50, we have
|𝑓̄(𝜏) − 𝑔̄(𝜏)| ≤ | max{𝑓(𝜎), 𝑓(𝜙)} − max{𝑔(𝜎), 𝑔(𝜙)}|
≤ max{|𝑓(𝜎) − 𝑔(𝜎)|, |𝑓(𝜙) − 𝑔(𝜙)|}
≤ ‖𝑓 − 𝑔‖∞ . □
Problem 5.50. Prove that if 𝑎, 𝑏, 𝑐, 𝑑 ∈ ℝ, then
| max{𝑎, 𝑏} − max{𝑐, 𝑑}| ≤ max{|𝑎 − 𝑐|, |𝑏 − 𝑑|}.
5.2.4. Persistence diagrams of pseudo-discrete Morse functions.
In Section 5.1.2, we examined a theory of persistent homology applicable
to certain kinds of filtrations of simplicial complexes. From this point of
view, there is nothing particularly “discrete Morse theory” about persis-
tent homology. Certain discrete Morse functions happen to give us the
right kind of filtration, so finding a discrete Morse function is just one of
many ways that we could perform a persistent homology computation.
By contrast, this section is devoted to constructing a theory of persis-
tence based exclusively on a pseudo-discrete Morse function. We begin
by recasting the language of persistent homology for a pseudo-discrete
Morse function.
We know by Theorem 5.40 that, topologically speaking, nothing in-
teresting happens between regular simplices or, to put it in a positive
light, the action happens precisely at the critical simplices. Hence, let 𝜎
and 𝜏 be critical simplices with 𝜎 ≺ 𝜏 (there may be other critical sim-
plices between them). Then there is an inclusion function 𝑖𝜎,𝜏 ∶ 𝐾(𝜎) →
𝐾(𝜏) defined by 𝑖𝜎,𝜏 (𝛼) = 𝛼. Now we know from Section 2.53 that to any
𝐾(𝜎) we can associate the vector space 𝐻𝑝 (𝐾(𝜎)). Again, as in Section
2.53, if 𝛼(𝑝) ∈ 𝐾(𝜎), we think of 𝛼 as a basis element in 𝕜|𝐾𝑝 (𝜎)| . Then
[𝛼] ∈ 𝐻𝑝 (𝐾(𝜎)). Furthermore, we may use 𝑖𝜎,𝜏 to define a linear trans-
formation 𝑖𝑝𝜎,𝜏 ∶ 𝐻𝑝 (𝐾(𝜎)) → 𝐻𝑝 (𝐾(𝜏)) by 𝑖𝑝𝜎,𝜏 ([𝛼]) = [𝑖𝜎,𝜏 (𝛼)].
The 𝑝th persistent homology vector space, denoted by 𝐻𝑝𝜎,𝜏 , is
defined by 𝐻𝑝𝜎,𝜏 ∶= Im(𝑖𝑝𝜎,𝜏 ). The 𝑝th persistent Betti numbers are
the corresponding Betti numbers, 𝛽𝑝𝜎,𝜏 ∶= rank 𝐻𝑝𝜎,𝜏 . Denote by 𝜎_ the
predecessor of 𝜎 in the total ordering given by ≺. If [𝛼] ∈ 𝐻𝑝 (𝐾(𝜎)), we say
that [𝛼] is born at the positive simplex 𝜎 if [𝛼] ∉ 𝐻𝑝𝜎_,𝜎 , and that it
dies at the negative simplex 𝜏 if 𝑖𝑝𝜎,𝜏_ ([𝛼]) ∉ 𝐻𝑝𝜎_,𝜏_ but 𝑖𝑝𝜎,𝜏 ([𝛼]) ∈ 𝐻𝑝𝜎_,𝜏 .
If the class [𝛼] is born at 𝜎 and dies at 𝜏, we call (𝜎, 𝜏) a persistence
pair. The difference 𝑓(𝜏) − 𝑓(𝜎) is the persistence of (𝜎, 𝜏). If a positive
simplex 𝜎 is not paired with any negative simplex 𝜏 (i.e., the class born at 𝜎
never dies), then 𝜎 is called essential or a point at infinity.
Although these definitions are quite technical, we have done all of
this before in Section 5.1.2. At this point, it might help to go back to one
of the concrete computations in that section to see if you can understand
how the above definition is precisely that computation.
We will start by defining the persistence diagram of a pseudo-
discrete Morse function. Recall that a multiset on a set 𝑆 is an or-
dered pair (𝑆, 𝑚) where 𝑚 ∶ 𝑆 → ℕ is a function recording the mul-
tiplicity of each element of 𝑆. In other words, a multiset is a set that
allows multiple occurrences of the same value.
Definition 5.51. Let 𝑓 ∶ 𝐾 → ℝ be a pseudo-discrete Morse function,
and write ℝ̄ ∶= ℝ ∪ {∞, −∞}. The persistence diagram of 𝑓, denoted
by 𝐷(𝑓), is the multiset on ℝ̄² containing one instance of (𝑓(𝜎), 𝑓(𝜏))
for each persistence pair (𝜎, 𝜏) of 𝑓, one instance of (𝑓(𝜎), ∞) for each
essential simplex 𝜎, and each point on the diagonal with countably infi-
nite multiplicity.
We saw an explicit example of a persistence diagram in Remark 5.18.
Again, it may help to refer back to see how the remark and Definition
5.51 relate to each other.
One concern that might be raised at this point is that it seems that
the persistence diagram 𝐷(𝑓) depends not only on the choice of pseudo-
discrete Morse function 𝑓 and gradient vector field 𝑉, but also on the
choice of total order ≺. Happily, 𝐷(𝑓) is dependent only on 𝑓, as the
following theorem shows:
Theorem 5.52. The persistence diagram 𝐷(𝑓) is well-defined; that is,
it is independent of both the chosen gradient vector field 𝑉 consistent
with 𝑓 and the total order ≺ consistent with 𝑓 and 𝑉.
Proof. We need to show that the information recorded in the definition
of the persistence diagram is the same regardless of the choice of 𝑉 or of
≺, i.e., that we obtain the same values for the persistence pairs, essential
simplices, and points on the diagonal. By definition, 𝐷(𝑓) must always
contain each point on the diagonal with countably infinite multiplicity,
regardless of the chosen 𝑓.
Fix a total order ≺ consistent with 𝑓, and suppose there are 𝑘 positive
𝑑-simplices 𝜎𝑖 with non-zero persistence and 𝑓(𝜎𝑖 ) = 𝑠. We show that
for any other total order ≺′ consistent with 𝑓, we must have 𝑘 positive 𝑑-
simplices 𝜎𝑖 with non-zero persistence and 𝑓(𝜎𝑖 ) = 𝑠. To see this, define
𝑠_ ∶= max{𝑓(𝛼) ∶ 𝛼 ∈ 𝐾, 𝑓(𝛼) < 𝑠}. Then 𝛽𝑑 (𝐾(𝑠)) = 𝛽𝑑 (𝐾(𝑠_)) + 𝑘.
These Betti numbers, however, are independent of ≺, so for any other
total order ≺′ consistent with 𝑓, there must be 𝑘 positive 𝑑-simplices 𝜎𝑖
with non-zero persistence and 𝑓(𝜎𝑖 ) = 𝑠.
Now suppose that there are 𝑘 persistence pairs (𝜎𝑖 (𝑑) , 𝜏𝑖 (𝑑+1) ) such
that (𝑓(𝜎𝑖 ), 𝑓(𝜏𝑖 )) = (𝑠, 𝑡). Let 𝛽𝑑𝑠,𝑡 be the 𝑑th persistent Betti number
of 𝐾(𝑠) in 𝐾(𝑡). There are exactly 𝛽𝑑𝑠,𝑡 − 𝛽𝑑𝑠_,𝑡 classes born at 𝑠, and by
definition of a persistence pair, the rank of the subvector space of classes
born at 𝑠 decreases at 𝑡 by 𝑘 from the next smaller value 𝑡_ = max{𝑓(𝜙) ∶
𝜙 ∈ 𝐾, 𝑓(𝜙) < 𝑡}. Hence we have that
𝛽𝑑𝑠,𝑡 − 𝛽𝑑𝑠_,𝑡 = (𝛽𝑑𝑠,𝑡_ − 𝛽𝑑𝑠_,𝑡_ ) − 𝑘.
The persistent Betti numbers are independent of the total order ≺,
so the persistence diagram depends only on 𝑓. □
We now define a way to measure how “far apart” or different two
persistence diagrams are from each other. If 𝑎 ∈ 𝑋 has multiplicity
𝑚(𝑎), we view this as 𝑎1 , 𝑎2 , … , 𝑎𝑚(𝑎) . These terms will be referred to as
individual elements.
Definition 5.53. Let 𝑋 and 𝑌 be two multisets of ℝ̄². The bottleneck
distance is
𝑑𝐵 (𝑋, 𝑌 ) ∶= inf_𝛾 sup_{𝑥∈𝑋} ‖𝑥 − 𝛾(𝑥)‖∞ ,
where 𝛾 ranges over all bijections from the individual elements of 𝑋 to
the individual elements of 𝑌 .
By convention, we define (𝑎, ∞)−(𝑏, ∞) = (𝑎−𝑏, 0), (𝑎, ∞)−(𝑏, 𝑐) =
(𝑎 − 𝑏, ∞), and ‖(𝑎, ∞)‖∞ = ∞ for all 𝑎, 𝑏, 𝑐 ∈ ℝ.
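For finite diagrams, the infimum in Definition 5.53 can be found by brute force. The sketch below is our own simplification: the diagonal (present with infinite multiplicity) is modeled by adding the diagonal projection of each point of the other diagram, matching two diagonal points costs nothing, and essential points at infinity are omitted.

```python
from itertools import permutations

def bottleneck(X, Y):
    """Brute-force bottleneck distance between two finite lists of
    off-diagonal points in the plane."""
    def proj(p):                      # nearest point on the diagonal
        m = (p[0] + p[1]) / 2
        return (m, m)
    def dist(p, q):
        if p[0] == p[1] and q[0] == q[1]:
            return 0.0                # two diagonal points cost nothing
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
    A = list(X) + [proj(q) for q in Y]
    B = list(Y) + [proj(p) for p in X]
    return min(max(dist(A[i], B[j]) for i, j in enumerate(perm))
               for perm in permutations(range(len(B))))

print(bottleneck([(0, 3)], []))  # 1.5: the lone point is matched to the diagonal
```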
The bottleneck distance provides another measure of how far apart
two pseudo-discrete Morse functions are. In this case, it measures how
far away their corresponding diagrams are. One nice property of the bot-
tleneck distance is that if two pseudo-discrete Morse functions are close,
then their corresponding diagrams are close. This is the so-called sta-
bility theorem below. It is called a stability theorem because a slight
change in a pseudo-discrete Morse function will not result in a wildly
different persistence diagram; in other words, a slight change in the
pseudo-discrete Morse function results in a slight change in the persis-
tence diagram. In that sense, the bottleneck distance is stable. First we
present a lemma.
Lemma 5.54. Let 𝑓 and 𝑔 be two flat pseudo-discrete Morse functions
consistent with gradient vector fields 𝑉𝑓 and 𝑉𝑔 , respectively. Then 𝑓𝑡 ∶=
(1 − 𝑡)𝑓 + 𝑡𝑔 is a flat pseudo-discrete Morse function consistent with
gradient vector field 𝑉 ∶= 𝑉𝑓 ∩ 𝑉𝑔 .
Notice the similarity to Lemma 4.18.
Proof. We know by Lemma 5.31 that 𝑓𝑡 is a pseudo-discrete Morse func-
tion. It remains to show that it is consistent with the gradient vector field
𝑉 = 𝑉𝑓 ∩ 𝑉𝑔 . Since 𝑓 and 𝑔 are flat, for any pair 𝜎(𝑝) < 𝜏(𝑝+1) we have
that 𝑓(𝜎) ≤ 𝑓(𝜏) and 𝑔(𝜎) ≤ 𝑔(𝜏). The same argument used for Lemma
4.18 then shows that 𝑓𝑡 is consistent with 𝑉. □
We are now able to prove the stability theorem.
Theorem 5.55 (Stability theorem). Let 𝑓, 𝑔 ∶ 𝐾 → ℝ be two pseudo-
discrete Morse functions. Then 𝑑𝐵 (𝐷(𝑓), 𝐷(𝑔)) ≤ ‖𝑓 − 𝑔‖∞ .
Proof. Without loss of generality, it suffices to prove the result in the
case where 𝑓 and 𝑔 are flat pseudo-discrete Morse functions (see Prob-
lem 5.56). Consider the family 𝑓𝑡 ∶= (1 − 𝑡)𝑓 + 𝑡𝑔 for 𝑡 ∈ [0, 1]. By
Lemma 5.54, 𝑓𝑡 is a flat pseudo-discrete Morse function. However, the
total orders ≺𝑓𝑡 may be different for different 𝑡. We will partition [0, 1]
into subintervals and find a total order that is consistent on a fixed inter-
val. To that end, let 𝜎(𝑝) < 𝜏(𝑝+1) with 𝑓(𝜎) − 𝑔(𝜎) ≠ 𝑓(𝜏) − 𝑔(𝜏). Then
there exists a unique 𝑡 such that 𝑓𝑡 (𝜎) = 𝑓𝑡 (𝜏), namely
𝑡𝜎,𝜏 ∶= (𝑓(𝜎) − 𝑓(𝜏)) / (𝑓(𝜎) − 𝑓(𝜏) − 𝑔(𝜎) + 𝑔(𝜏)).
If 𝑓(𝜎) − 𝑔(𝜎) = 𝑓(𝜏) − 𝑔(𝜏), then 𝑓𝑡 (𝜎) = 𝑓𝑡 (𝜏) for all 𝑡 if and only
if 𝑓(𝜎) = 𝑓(𝜏). Hence the order ≺𝑓𝑡 only changes at the values 𝑡𝜎,𝜏 .
Since there are only finitely many of these and the total ordering remains
constant in between these 𝑡𝜍,𝜏 , we obtain the desired partition along with
the total order ≺𝑖 on 𝐾 that is consistent with 𝑓𝑡 for all 𝑡 ∈ [𝑡𝑖 , 𝑡𝑖+1 ]. Since
these all have the same total order, they all have the same gradient vector
field so that, in particular, all such 𝑓𝑡 in the interval [𝑡𝑖 , 𝑡𝑖+1 ] have the
same persistence pairs 𝑃𝑖 . Hence there is a natural correspondence of
points between each of the persistence diagrams 𝐷(𝑓𝑡 ). If 𝜎 is essential,
write (𝜎, ∞) ∈ 𝑃𝑖 and 𝑓𝑡 (∞) = ∞. Let 𝑟, 𝑠 ∈ [𝑡𝑖 , 𝑡𝑖+1 ]. Because we have
identified persistence pairs, the bottleneck distance satisfies
𝑑𝐵 (𝐷(𝑓𝑟 ), 𝐷(𝑓𝑠 )) ≤ max_{(𝜎,𝜏)∈𝑃𝑖} ‖(𝑓𝑟 (𝜎), 𝑓𝑟 (𝜏)) − (𝑓𝑠 (𝜎), 𝑓𝑠 (𝜏))‖
≤ ‖𝑓𝑟 − 𝑓𝑠 ‖∞ = |𝑠 − 𝑟|‖𝑓 − 𝑔‖∞ ,
where the last equality was derived in Example 5.24. Using this fact,
we then sum over the partition and use the triangle inequality for the
bottleneck distance to obtain
𝑑𝐵 (𝐷(𝑓), 𝐷(𝑔)) ≤ ∑_{𝑖=0}^{𝑘−1} 𝑑𝐵 (𝐷(𝑓𝑡𝑖 ), 𝐷(𝑓𝑡𝑖+1 ))
≤ ∑_{𝑖=0}^{𝑘−1} (𝑡𝑖+1 − 𝑡𝑖 )‖𝑓 − 𝑔‖∞
= ‖𝑓 − 𝑔‖∞ ,
which is the desired result. □
Problem 5.56. Prove that it is sufficient to consider flat pseudo-discrete
Morse functions in the proof of Theorem 5.55.
Exercise 5.57. Verify that the value 𝑡𝜎,𝜏 given in the proof of Theorem
5.55 is the correct value and is unique.
Problem 5.58. Let 𝐾 be a complex. For any 𝜖 > 0, find two persistence
diagrams 𝐷(𝑓) and 𝐷(𝑔) on 𝐾 such that 0 < 𝑑𝐵 (𝐷(𝑓), 𝐷(𝑔)) < 𝜖.
Chapter 6
Boolean functions and
evasiveness
This chapter is devoted to a fascinating application to a computer sci-
ence search problem involving “evasiveness,” originally due to Forman
[67]. Along the way, we will introduce Boolean functions and see how
they relate to simplicial complexes and the evasiveness problem. For an-
other interesting use of discrete Morse theory in the type of application
discussed in this chapter, see [59].
6.1. A Boolean function game
Let’s play a game.¹ For any integer 𝑛 ≥ 0, we’ll pick a function 𝑓 whose
input is an (𝑛 + 1)-tuple of 0s and 1s, usually written in vector notation
as 𝑥.⃗ The output of 𝑓 is either 0 or 1. In other words, we are looking at
a function of the form 𝑓 ∶ {0, 1}𝑛+1 → {0, 1} with 𝑓(𝑥)⃗ = 0 or 1. Such a
function is called a Boolean function. The hider devises a strategy for
choosing an input 𝑥⃗ ∶= (𝑥0 , 𝑥1 , … , 𝑥𝑛 ). The seeker does not know the
hider’s input, and has to guess the output by asking the hider to reveal
any particular input value (i.e., entry of the input vector). The seeker
may only ask yes or no questions of the form “Is 𝑥𝑖 = 0?” or “Is 𝑥𝑖 = 1?”
The chosen Boolean function 𝑓 is known to both players. The object
¹Not the kind that Jigsaw plays, thank goodness.
of the game is for the seeker to guess the output in as few questions as
possible. The seeker wins if she can correctly predict the output without
having to reveal every single input value. The seeker is said to have a
winning strategy if she can always predict the correct output without
revealing every input value, regardless of the strategy of the hider. The
hider wins if the seeker needs to ask to reveal all the entries of the input
vector.
In the following examples and exercises, it may help to actually
“play” this game with a partner.
Example 6.1. Let 𝑛 = 99 and let 𝑓 ∶ {0, 1}100 → {0, 1} be defined by
𝑓(𝑥)⃗ = 0 for all 𝑥⃗ ∈ {0, 1}100 . Then the seeker can always win this game
in zero questions; that is, it does not matter what strategy the hider picks
for creating an input since the output is always 0.
Example 6.2. Define 𝑃₃⁴ ∶ {0, 1}4+1 → {0, 1} by 𝑃₃⁴ (𝑥0 , 𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 ) = 𝑥3 .
The seeker can always win in one question by asking “Is 𝑥3 = 1?” In
general, we define the projection function 𝑃ᵢⁿ ∶ {0, 1}𝑛+1 → {0, 1} by
𝑃ᵢⁿ (𝑥)⃗ = 𝑥𝑖 for 0 ≤ 𝑖 ≤ 𝑛.
Exercise 6.3. Prove that the seeker can always win in one question for
the Boolean function 𝑃ᵢⁿ .
Note that in Example 6.2, it is true that the seeker could ask an un-
helpful question like “Is 𝑥4 = 1?” but we are trying to come up with the
best possible strategy.
Exercise 6.4. Let 𝑃ⁿ𝑗,ℓ ∶ {0, 1}𝑛+1 → {0, 1} be defined by 𝑃ⁿ𝑗,ℓ (𝑥0 , … , 𝑥𝑛 ) =
𝑥𝑗 + 𝑥ℓ mod 2 for 0 ≤ 𝑗 ≤ ℓ ≤ 𝑛. Prove that the seeker has a winning
strategy and determine the minimum number of questions the seeker
needs to ask in order to win.
Although we can think of the above as simply a game, it is much
more important than that. Boolean functions are used extensively in
computer science, logic gates, cryptography, social choice theory, and
many other applications. See for example [131], [92], and [154]. You
can imagine that if a computer has to compute millions of outputs of
Boolean functions, tremendous savings in time and memory are possi-
ble if the computer can compute the output by processing only minimal
input. As a silly but illustrative example, consider the Boolean function
in Example 6.1. This Boolean function takes in a string of one hundred 0s
and 1s and will always output a 0. If a computer program had to compute
this function for 200 different inputs, it would be horribly inefficient for
the computer to grab each of the 200 inputs from memory and then do
the computation, because who cares what the inputs are? We know the
output is always 0. In a similar manner, if we had to compute 𝑃₃⁹⁹⁹ on
1000 inputs, all we would need to do is look up the entry 𝑥3 of each input
rather than all 1000 entries. Hence, we can possibly save time and
many resources by determining the minimum amount of information
needed in order to determine the output of a Boolean function.
On the other hand, a Boolean function that requires knowledge of
every single input value is going to give us some trouble. A Boolean
function 𝑓 ∶ {0, 1}𝑛+1 → {0, 1} for which there exists a strategy for the
hider to always win is called evasive. In other words, such a function
requires knowledge of the value of every single entry of the input in order
to determine the output.
Define 𝑇₂⁴ ∶ {0, 1}4+1 → {0, 1} by 𝑇₂⁴ (𝑥0 , 𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 ) = 1 if there are
two or more 1s in the input and 0 otherwise. Let’s simulate one possible
playing of this game.
Seeker: Is 𝑥1 = 1?
Hider: Yes.
Seeker: Is 𝑥3 = 1?
Hider: Yes.
Seeker: Now I know the input has at least two 1s, so the output
is 1. I win!
While the seeker may have won this game, the hider did not use a
very good strategy. Letting 𝑥3 = 1 on the second question guaranteed
two 1s in the input, so the output was determined. Rather, the hider
should have had a strategy that did not reveal whether or not the input
had two 1s until the very end. Let’s try again.
Seeker: Is 𝑥1 = 1?
Hider: Yes.
Seeker: Is 𝑥3 = 1?
Hider: No.
Seeker: Is 𝑥2 = 1?
Hider: No.
Seeker: Is 𝑥0 = 1?
Hider: No.
Seeker: Is 𝑥4 = 1?
Hider: Yes.
Seeker: Now I know that the output is 1, but I needed the whole
input. I lose.
Hence, the hider can always win if the Boolean function is 𝑇₂⁴ . In
other words, 𝑇₂⁴ is evasive.
Problem 6.5. The threshold function 𝑇ᵢⁿ = 𝑇𝑖 ∶ {0, 1}𝑛+1 → {0, 1} is
defined to be 1 if there are 𝑖 or more 1s in the input and 0 otherwise.
Prove that the threshold function is evasive.
6.2. Simplicial complexes are Boolean functions
Although our game may seem interesting, what does it have to do with
simplicial complexes? There is a natural way to associate a simplicial
complex to a Boolean function that satisfies an additional property called
monotonicity.
Definition 6.6. Let 𝑓 ∶ {0, 1}𝑛+1 → {0, 1} be a Boolean function. Then
𝑓 is called monotone if whenever 𝑥⃗ ≤ 𝑦⃗ (that is, 𝑥𝑖 ≤ 𝑦𝑖 for every 𝑖), we
have 𝑓(𝑥)⃗ ≤ 𝑓(𝑦)⃗ .
Exercise 6.7. Prove that the two constant Boolean functions 0(𝑥)⃗ = 0
and 1(𝑥)⃗ = 1 are monotone.
Exercise 6.8. Prove that the threshold function 𝑇ᵢⁿ ∶ {0, 1}𝑛+1 → {0, 1}
defined in Problem 6.5 is monotone.
As we will see, the condition of being monotone will ensure that the
following construction is closed under taking subsets. The construction
is attributed to Saks et al. [95].
Definition 6.9. Let 𝑓 be a monotone Boolean function. The simplicial
complex Γ𝑓 induced by 𝑓 is the set of all sets of coordinates (excluding
(1, 1, … , 1)) such that if 𝑥⃗ ∈ {0, 1}𝑛+1 is 0 exactly on these coordinates,
then 𝑓(𝑥)⃗ = 1. We adopt the convention that if 𝑥𝑖0 = ⋯ = 𝑥𝑖𝑘 = 0 are
the 0 coordinates of an input, then the corresponding simplex in Γ𝑓 is
denoted by 𝜎𝑥⃗ ∶= {𝑥𝑖0 , … , 𝑥𝑖𝑘 }.
Example 6.10. You proved in Exercise 6.8 that the threshold function
is monotone. Hence, it corresponds to some simplicial complex. Let us
investigate 𝑇₂³ = 𝑇2 . A simplex in Γ𝑇2 corresponds to the 0 coordinates
of a vector 𝑥⃗ such that 𝑇2 (𝑥)⃗ = 1. We thus need to find all 𝑥⃗ such that
𝑇2 (𝑥)⃗ = 1. For example, 𝑇2 ((1, 0, 0, 1)) = 1. This has a 0 in 𝑥1 and 𝑥2 ,
so this vector corresponds to the simplex 𝑥1 𝑥2 . Also, 𝑇2 ((0, 1, 1, 1)) =
1, which corresponds to the simplex 𝑥0 . Finding all such simplices, 𝑇2
produces the simplicial complex
[Figure: the complete graph 𝐾4 on the four vertices 𝑥0 , 𝑥1 , 𝑥2 , 𝑥3 .]
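The construction of Definition 6.9 is easy to automate. The sketch below (helper names are ours) rebuilds the complex of this example and confirms that it consists of the 4 vertices and 6 edges of 𝐾4:

```python
from itertools import product

def T(i, x):
    """Threshold function: 1 if the tuple x contains at least i ones."""
    return 1 if sum(x) >= i else 0

def gamma(f, n):
    """Gamma_f of Definition 6.9: the zero-coordinate sets of the inputs
    x in {0,1}^(n+1) with f(x) = 1, excluding (1, ..., 1)."""
    simplices = set()
    for x in product((0, 1), repeat=n + 1):
        if all(v == 1 for v in x):
            continue                       # the excluded all-ones vector
        if f(x) == 1:
            simplices.add(frozenset(i for i, v in enumerate(x) if v == 0))
    return simplices

G = gamma(lambda x: T(2, x), 3)
print(sorted(len(s) for s in G))  # [1, 1, 1, 1, 2, 2, 2, 2, 2, 2]: K_4
```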
Exercise 6.11. Why do we exclude the vector (1, 1, … , 1) in Definition 6.9?
As mentioned above, we need monotonicity to ensure that Γ𝑓 is a
simplicial complex.
Problem 6.12. Show that the Boolean function 𝑓 ∶ {0, 1}𝑛+1 → {0, 1}
defined by 𝑓(𝑥)⃗ = 𝑥0 +𝑥1 +⋯+𝑥𝑛 mod 2 is not monotone. Then attempt
to construct the corresponding Γ𝑓 and show why it is not a simplicial
complex.
Exercise 6.13. Compute the simplicial complexes Γ0 and Γ1 induced by
the constant functions defined in Exercise 6.7.
Problem 6.14. Show that the projection 𝑃ᵢⁿ ∶ {0, 1}𝑛+1 → {0, 1} defined
by 𝑃ᵢⁿ (𝑥)⃗ = 𝑥𝑖 is monotone, and then compute Γ𝑃ᵢⁿ .
Problem 6.15. Find a Boolean function 𝑓 such that Γ𝑓 is the simplicial
complex below:
𝑥1
𝑥2
𝑥0
𝑥3
We now show that monotone Boolean functions and simplicial
complexes are in bijective correspondence. In other words, a monotone
Boolean function and a simplicial complex are two sides of the same
coin.
Proposition 6.16. Let 𝑛 ≥ 0 be a fixed integer. Then there is a bijection
between monotone Boolean functions 𝑓 ∶ {0, 1}𝑛+1 → {0, 1} and simpli-
cial complexes on [𝑛].
Proof. Let 𝑓 ∶ {0, 1}𝑛+1 → {0, 1} be a monotone Boolean function. If
𝑓(𝑥)⃗ = 1 with 0 coordinates 𝑥𝑖0 , … , 𝑥𝑖𝑘 , write 𝜎𝑥⃗ ∶= {𝑥𝑖0 , … , 𝑥𝑖𝑘 } and let
Γ𝑓 be the collection of all such 𝜎𝑥⃗ . To see that Γ𝑓 is a simplicial complex,
let 𝜎𝑥⃗ ∈ Γ𝑓 and suppose 𝜎𝑦⃗ ∶= {𝑦𝑗0 , 𝑦𝑗1 , … , 𝑦𝑗ℓ } is a subset of 𝜎𝑥⃗ . Define
𝑦 ⃗ to be 0 exactly on the coordinates of 𝜎𝑦⃗. We need to show that 𝑓(𝑦)⃗ =
1. Suppose by contradiction that 𝑓(𝑦)⃗ = 0. Since 𝜎𝑥⃗ ∈ Γ𝑓 , 𝑓(𝑥)⃗ = 1.
Observe that since 𝜎𝑦⃗ ⊆ 𝜎𝑥⃗ , 𝑦 ⃗ can be obtained from 𝑥⃗ by switching all
the coordinates in 𝜎𝑥⃗ − 𝜎𝑦⃗ from 0 to 1. But then if 𝑓(𝑦)⃗ = 0, we have
switched our inputs from 0 to 1 while our output has switched from 1 to
0, contradicting the assumption that 𝑓 is monotone. Thus 𝑓(𝑦)⃗ = 1 and
Γ𝑓 is a simplicial complex.
Now let 𝐾 be a simplicial complex on [𝑛]. Let 𝜎 ∈ 𝐾 with 𝜎 =
{𝑥𝑖0 , … , 𝑥𝑖𝑘 }. Define 𝑥𝜎⃗ ∈ {0, 1}𝑛+1 to be 0 on coordinates 𝑥𝑖0 , … , 𝑥𝑖𝑘 and
1 on all other coordinates. Define 𝑓 ∶ {0, 1}𝑛+1 → {0, 1} by 𝑓(𝑥𝜎⃗ ) = 1 for
all 𝜎 ∈ 𝐾 and 0 otherwise. To see that 𝑓 is monotone, let 𝑓(𝑥𝜎⃗ ) = 1 with
0 coordinates 𝑥𝑖0 , … , 𝑥𝑖𝑘 and suppose 𝑥′⃗ has 0 coordinates on a subset 𝑆
of {𝑥𝑖0 , … , 𝑥𝑖𝑘 }. Since 𝐾 is a simplicial complex, 𝑆 ∈ 𝐾. Hence the vector
with 0 coordinates 𝑆 has output 1. But this vector is precisely 𝑥′⃗ .
It is clear that these constructions are inverses of each other, hence
the result. □
Problem 6.17. Let 𝑇ᵢⁿ ∶ {0, 1}𝑛+1 → {0, 1} be the threshold function.
Compute Γ𝑇ᵢⁿ .
6.3. Quantifying evasiveness
By Proposition 6.16, there is a bijective correspondence between mono-
tone Boolean functions and simplicial complexes. Hence any concept
we can define for a monotone Boolean function can be “imported” to
a simplicial complex. In particular, we can define what it means for a
simplicial complex to be evasive. Furthermore, we will generalize the
notion of evasiveness by associating a number to the complex, quantify-
ing “how close to being evasive is it?”
Let Δ𝑛 be the 𝑛-simplex on vertices 𝑣0 , 𝑣1 , … , 𝑣𝑛 . Suppose 𝑀 ⊆ Δ𝑛 is
a subcomplex known to both players. Let 𝜎 ∈ Δ𝑛 be a simplex known
only to the hider. The goal of the seeker is to determine whether or not
𝜎 ∈ 𝑀 using as few questions as possible. The seeker is permitted ques-
tions of the form “Is vertex 𝑣𝑖 in 𝜎?” The hider is allowed to use previous
questions to help determine whether to answer “yes” or “no” to the next
question. The seeker uses an algorithm 𝐴 based on all previous answers
to determine which vertex to ask about. Any such algorithm 𝐴 is called
a decision tree algorithm.
Example 6.18. Let us see how this game played with simplicial com-
plexes is the same game played with Boolean functions by starting with
the Boolean function from Example 6.1, defined by 𝑓 ∶ {0, 1}99+1 → {0, 1}
where 𝑓(𝑥)⃗ = 0 for all 𝑥⃗ ∈ {0, 1}99+1 . Then 𝑛 = 99 so that we have Δ99 .
The subcomplex 𝑀 known to both players is then the simplicial complex
induced by the Boolean function, which in this case is again Δ99 . Now
think about this. It makes no difference at all what hidden face 𝜎 ∈ Δ99
is. Since 𝜎 ∈ Δ99 and 𝑀 = Δ99 , 𝜎 is most certainly in 𝑀. So the seeker
will immediately win without asking any questions, just as in Example
6.1.
Example 6.19. Now we will illustrate with the threshold function 𝑇₂³, a
more interesting example. In this case, Δ𝑛 = Δ3 and 𝑀 ⊆ Δ3 is the sim-
plicial complex from Example 6.10, the complete graph on four vertices,
denoted by 𝐾4 , or the 1-skeleton of Δ3 . We can consider the same mock
versions of the game as in the dialogue preceding Problem 6.5.
Seeker: Is 𝑥1 ∈ 𝜎?
Hider: No.
Seeker: Is 𝑥3 ∈ 𝜎?
Hider: No.
Seeker: Now I know that 𝜎 ∈ 𝑀 because everything that is left
over is contained in the 1-skeleton, which is precisely 𝑀.
Again, as before, this illustrates a poor choice of strategy by the hider
(following the algorithm “always say no”²). Let’s show a better strategy
for the hider.
Seeker: Is 𝑥1 ∈ 𝜎?
Hider: No.
Seeker: Is 𝑥3 ∈ 𝜎?
Hider: Yes.
Seeker: Is 𝑥2 ∈ 𝜎?
Hider: Yes.
Seeker: Is 𝑥0 ∈ 𝜎?
Hider: No.
Seeker: Now I know that 𝜎 = 𝑥2 𝑥3 ∈ 𝑀, but it is too late. I needed
the whole input.
²Much to the displeasure of Joe Gallian.
Notice that the hider keeps the seeker guessing until the very end.
If you follow along, you realize that everything depends on the answer
to the question “Is 𝑥0 ∈ 𝜎?”
Denote by 𝑄(𝜎, 𝐴, 𝑀) the number of questions the seeker asks to
determine if 𝜎 is in 𝑀 using algorithm 𝐴. The complexity of 𝑀, denoted
by 𝑐(𝑀), is defined by
𝑐(𝑀) ∶= min_𝐴 max_𝜎 𝑄(𝜎, 𝐴, 𝑀).
If 𝑐(𝑀) = 𝑛 + 1 for a particular 𝑀, then 𝑀 is called evasive, and it is
called nonevasive otherwise. For the examples above, we saw that Δ⁹⁹ is
nonevasive while 𝐾4 is evasive. Of course, this language is intentionally
consistent with the corresponding Boolean function language. Further-
more, notice how evasiveness is just the special case where 𝑐(𝑀) = 𝑛+1,
while for nonevasive complexes 𝑐(𝑀) can have any value from 0 (ex-
tremely nonevasive) to 𝑛 (as close to being evasive as possible without
actually being evasive).
Any 𝜎 such that 𝑄(𝜎, 𝐴, 𝑀) = 𝑛 + 1 is called an evader of 𝐴. In
Example 6.19, we saw that the hider had chosen 𝜎 = 𝑥2 𝑥3 , so 𝑥2 𝑥3 is an
evader. However, the hider could have just as easily answered “yes” to
the seeker’s last question, and in that case 𝜎 = 𝑥2 𝑥3 𝑥0 would have been
an evader. Hence, evaders come in pairs in the sense that by the time the
seeker gets down to question 𝑛 + 1, she is trying to distinguish between
two simplices 𝜎1 and 𝜎2 . If 𝜎1 < 𝜎2 with dim(𝜎1 ) = 𝑝, we call 𝑝 the index
of the evader pair {𝜎1 , 𝜎2 }.
Exercise 6.20. Show that for a pair of evaders {𝜎1 , 𝜎2 }, we have 𝜎1 < 𝜎2
(without loss of generality), dim(𝜎2 ) = dim(𝜎1 ) + 1, and that 𝜎1 ∈ 𝑀
while 𝜎2 ∉ 𝑀. What does this remind you of?³
³I suppose in theory this could remind you of anything, from Sabellianism to pop tarts, but try to give a coherent answer.
6.4. Discrete Morse theory and evasiveness
You may have observed in Exercise 6.20 that a pair of evaders is a lot like
an elementary collapse. Yet while a collapse represents something sim-
ple, a pair of evaders represents something complex, almost like a critical
simplex. We will use discrete Morse theory to prove the following:
Theorem 6.21. Let 𝐴 be a decision tree algorithm and let 𝑒𝑝 (𝐴) de-
note the number of pairs of evaders of 𝐴 having index 𝑝. For each 𝑝 =
0, 1, 2, … , 𝑛 − 1, we have 𝑒𝑝 (𝐴) ≥ 𝑏𝑝 (𝑀) where 𝑏𝑝 (𝑀) denotes the 𝑝th
Betti number of 𝑀.
In order to prove Theorem 6.21, we need to carefully investigate the
relationship between the decision tree algorithm 𝐴 chosen by the seeker
and discrete Morse theory. To do this, let us explicitly write down a pos-
sible decision tree algorithm 𝐴 that the seeker could execute in the case
of our running Example 6.19. A node 𝑥𝑖 in the tree below is shorthand
for the question “Is vertex 𝑥𝑖 ∈ 𝜎?”
𝑥1
├─ Y: 𝑥3
│   ├─ Y: 𝑥2 (final question: 𝑥0 in either case)
│   └─ N: 𝑥0 (final question: 𝑥2 in either case)
└─ N: 𝑥2
    ├─ Y: 𝑥3 (final question: 𝑥0 in either case)
    └─ N: 𝑥0 (final question: 𝑥3 in either case)
Once the seeker is on the very last row, she knows whether or not
the previous vertices are in 𝜎 but cannot be sure if the current vertex is in
𝜎. For example, suppose the seeker finds herself on the third value from
the left, 𝑥2 . Working our way back up, the seeker knows that 𝑥0 𝑥1 ∈ 𝜎
but isn’t sure if 𝑥2 ∈ 𝜎. In other words, she cannot distinguish between
𝑥0 𝑥1 and 𝑥0 𝑥2 𝑥1 . The key to connecting this algorithm to discrete Morse
theory is to use the observation of Exercise 6.20 and view (𝑥0 𝑥1 , 𝑥0 𝑥2 𝑥1 )
as a vector of a gradient vector field. In fact, we can do this for all nodes
in the last row to obtain a gradient vector field on Δ3 . Actually, we have
to be a little careful. Notice that the rightmost node on the bottom row
corresponds to “vector” (∅, 𝑥3 ), which we do not consider part of a gradi-
ent vector field. But if we throw it out, we obtain a gradient vector field
that collapses Δ3 to the vertex 𝑥3 .
Problem 6.22. Write down explicitly a possible decision tree algorithm
for the Boolean projection function 𝑃₃⁴ ∶ {0, 1}⁴⁺¹ → {0, 1}. (Note that
this requires you to translate this into a question of simplicial complexes.)
Compute the corresponding induced gradient vector field on Δ4.
In general, any decision tree algorithm 𝐴 induces a gradient vector
field 𝑉𝐴 = 𝑉 on Δ𝑛 as follows: For each path in a decision tree, suppose
the seeker has asked 𝑛 questions and has determined that 𝛼 ⊆ 𝜎. The
(𝑛 + 1)st (and final) question is “Is 𝑣𝑖 ∈ 𝜎?” Let 𝛽 = 𝛼 ∪ {𝑣𝑖 }, and include
{𝛼, 𝛽} ∈ 𝑉 if and only if 𝛼 ≠ ∅. We saw in Section 5.2.2 both a partial and
a total order on the simplices of a simplicial complex. In the proof of the
following lemma, we will construct another total order on the simplices
of Δ𝑛 .
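This construction is straightforward to mechanize. In the Python sketch below (our own encoding, not the text's), a decision tree node is a tuple (vertex, yes-subtree, no-subtree), and a node whose two subtrees are both None poses the final question along its path; the function returns the pairs of 𝑉𝐴, discarding the one whose smaller simplex is ∅.

```python
def induced_vectors(tree, known=frozenset()):
    """Collect the pairs (alpha, beta = alpha | {v}) that a decision
    tree contributes to V_A.  A node is (vertex, yes, no); a node with
    yes = no = None asks the (n+1)st question on its path."""
    v, yes, no = tree
    if yes is None and no is None:        # final question: emit the pair
        alpha = known
        return [(alpha, alpha | {v})] if alpha else []
    # A "yes" answer puts v into the hidden simplex; a "no" does not.
    return induced_vectors(yes, known | {v}) + induced_vectors(no, known)
```

Encoding the seeker's tree from Example 6.19 (with 𝑥𝑖 as the integer 𝑖) yields seven pairs, among them ({𝑥0, 𝑥1}, {𝑥0, 𝑥1, 𝑥2}), and leaves only the vertex 𝑥3 unpaired, matching the collapse of Δ3 onto 𝑥3 noted above.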
Lemma 6.23. Let 𝐴 be a decision tree algorithm. Then 𝑉 = 𝑉𝐴 is a
gradient vector field on Δ𝑛 .
Proof. By Theorem 2.51, it suffices to show that 𝑉 is a discrete vector
field with no closed 𝑉-paths. That 𝑉 is a discrete vector field is clear,
since each path in the decision tree algorithm corresponds to a unique
simplex.
To show that there are no closed 𝑉-paths, we put a total order ≺
on the simplices of Δ𝑛 and show that if 𝛼0 , 𝛽0 , 𝛼1 , 𝛽1 , … is a 𝑉-path, then
𝛼0 ≻ 𝛽0 ≻ 𝛼1 ≻ 𝛽1 ≻ ⋯. That this is sufficient is Problem 6.24. Assign
to each edge labeled Y the depth of its sink node (lower node). For each
path from the root vertex to a leaf (vertex with a single edge), construct
a tuple whose entries are the values assigned to Y each time an edge
labeled Y is traversed. In this way, we associate to each simplex 𝛼^(𝑝) a
tuple 𝑛(𝛼) ∶= (𝑛0(𝛼), 𝑛1(𝛼), … , 𝑛𝑝(𝛼)) where 𝑛𝑖(𝛼) is the value of the 𝑖th
Y answer and 𝑛0(𝛼) < ⋯ < 𝑛𝑝(𝛼). For any two simplices 𝛼^(𝑝) and 𝛽^(𝑞)
we define 𝛼 ≻ 𝛽 if there is a 𝑘 such that 𝑛𝑘 (𝛼) < 𝑛𝑘 (𝛽) and 𝑛𝑖 (𝛼) = 𝑛𝑖 (𝛽)
for all 𝑖 < 𝑘. If there is no such 𝑘 and 𝑞 > 𝑝, then 𝑛𝑖 (𝛼) = 𝑛𝑖 (𝛽) for all
0 ≤ 𝑖 ≤ 𝑝, so set 𝛼 ≻ 𝛽. This order is shown to be transitive in Problem
6.25.
Now we show that the total order we defined above is preserved by a
𝑉-path. To that end, suppose 𝛼0^(𝑝), 𝛽0^(𝑝+1), 𝛼1^(𝑝) is a segment in a 𝑉-path.
We show that 𝛼0 ≻ 𝛽0 ≻ 𝛼1. Since (𝛼0, 𝛽0) ∈ 𝑉, we have that
𝛼0 ≠ 𝛼1 and 𝛼1 ⊆ 𝛽0. Now 𝛼0 and 𝛽0 differ by only a single vertex, a
difference which was not determined until the very bottom row of the
decision tree. Hence, by definition of our total order, 𝑛𝑖(𝛼0) = 𝑛𝑖(𝛽0) for
all 0 ≤ 𝑖 ≤ 𝑝 while 𝑛𝑝+1(𝛽0) = 𝑛 + 1 and 𝑛𝑝+1(𝛼0) is undefined. Thus
𝛼0 ≻ 𝛽0.
It remains to show that 𝛽0^(𝑝+1) ≻ 𝛼1^(𝑝). Write 𝛽0 = 𝑢0 𝑢1 ⋯ 𝑢𝑝 𝑢𝑝+1.
For 𝜎 = 𝛼0 or 𝛽0 , we then have that the question 𝑛𝑖 (𝛽0 ) is “Is 𝑢𝑖 ∈ 𝜎?”
Since 𝛼1 ⊆ 𝛽0 , there is a 𝑘 such that 𝛼1 = 𝑢0 𝑢1 ⋯ 𝑢𝑘−1 𝑢𝑘+1 ⋯ 𝑢𝑝+1 .
Observe that the first 𝑛𝑘 (𝛽0 ) − 1 questions involve 𝑢0 , … , 𝑢𝑘−1 as well as
vertices not in 𝛽0 , which also means that they are not in 𝛼0 or 𝛼1 . Hence,
the first 𝑛𝑘 (𝛽0 ) − 1 answers will all be the same whether 𝜎 = 𝛼0 , 𝛼1 , or
𝛽0 ; i.e., the corresponding strings all agree until 𝑛𝑘 . For 𝑛𝑘 , the question
is “Is 𝑢𝑘 ∈ 𝜎?” If 𝜎 = 𝛽0 , the answer is “yes.” If 𝜎 = 𝛼1 , the answer
is “no.” Thus, the next value in the string for 𝛼1 will be larger than the
value just placed in 𝑛𝑘 (𝛽0 ). We conclude that 𝑛𝑖 (𝛼1 ) = 𝑛𝑖 (𝛽0 ) for 𝑖 < 𝑘
and 𝑛𝑘 (𝛼1 ) > 𝑛𝑘 (𝛽0 ), hence 𝛽0 ≻ 𝛼1 . □
It may aid understanding of the construction in Lemma 6.23 to draw
an example and explicitly compute some of these strings.
Problem 6.24. Suppose that there is a total order ≺ on Δ𝑛 such that if
𝛼0 , 𝛽0 , 𝛼1 , 𝛽1 , … is a 𝑉-path, then 𝛼0 ≻ 𝛽0 ≻ 𝛼1 ≻ 𝛽1 ≻ ⋯. Prove that 𝑉
does not contain any closed 𝑉-paths.
Problem 6.25. Prove that the order defined in Lemma 6.23 is transitive.
Exercise 6.26. Let 𝐴 be a decision tree algorithm and 𝑉 the correspond-
ing gradient vector field. How can you tell which vertex 𝑉 collapses onto
immediately from the decision tree 𝐴?
There is one more fairly involved result that we need to combine
with Lemma 6.23 in order to prove Theorem 6.21. The upshot is that
after this result, not only will we be able to prove the desired theorem,
but we will also get several other corollaries for free. We motivate this
result with an example.
Example 6.27. Let 𝑀 ⊆ Δ3 be given by

[Figure: a graph 𝑀 on the vertices 𝑣0, 𝑣1, 𝑣2, 𝑣3.]
and consider the decision tree

𝑣0
├─ Y: 𝑣1
│   ├─ Y: 𝑣2 (final question: 𝑣3 in either case)
│   └─ N: 𝑣3 (final question: 𝑣2 in either case)
└─ N: 𝑣2
    ├─ Y: 𝑣1 (final question: 𝑣3 in either case)
    └─ N: 𝑣3 (final question: 𝑣1 in either case)
We know from Lemma 6.23 that 𝐴 induces a gradient vector field
on Δ3 , which in this case is a collapse onto 𝑣1 . Now the pairs of evaders
are easily seen to be {𝑣0 𝑣1 , 𝑣0 𝑣1 𝑣3 }, {𝑣1 𝑣2 , 𝑣1 𝑣2 𝑣3 }, and {𝑣2 , 𝑣2 𝑣3 }, and by
definition one element in each pair lies in 𝑀 while the other lies out-
side of 𝑀. Hence, if we think of the evaders as critical elements, note
that we can start with 𝑣1 and perform elementary expansions and ad-
ditions of the three evaders in 𝑀 to obtain 𝑀. Furthermore, starting
from 𝑀, we can perform elementary expansions and attachments of the
three evaders not in 𝑀 to obtain Δ3 . What are these elementary expan-
sions? They are precisely the elements of the induced gradient vector
field which are not the evaders. In the decision tree above, these are
{𝑣0 𝑣1 𝑣2 , 𝑣0 𝑣1 𝑣2 𝑣3 }, {𝑣0 𝑣3 , 𝑣0 𝑣3 𝑣2 }, {𝑣0 , 𝑣0 𝑣2 }, and {𝑣3 , 𝑣3 𝑣1 }. Note that the
excluded {∅, 𝑣1} tells us where to begin. Explicitly, the series of expansions
and attachments of evaders is given by starting with the single vertex 𝑣1.
Next we attach the evader 𝑣2.

[Figure: the two isolated vertices 𝑣1 and 𝑣2.]
Now we attach the evader 𝑣1 𝑣2 and perform the elementary expansion
{𝑣0, 𝑣0 𝑣2 }:

[Figure: before-and-after diagrams of the expansion.]
To obtain 𝑀, we attach the evader 𝑣0 𝑣1 and make the expansion {𝑣3, 𝑣3 𝑣1 }:

[Figure: before-and-after diagrams of the expansion.]
We continue to build from 𝑀 all the way up to Δ3. This begins with the
attachment of the evader 𝑣2 𝑣3 followed by the expansion {𝑣0 𝑣3, 𝑣0 𝑣3 𝑣2 }:

[Figure: before-and-after diagrams of the expansion.]
Next we have the attachment of the evader 𝑣1 𝑣3 𝑣2 :

[Figure: the resulting complex.]
And finally we have the attachment of 𝑣0 𝑣1 𝑣3 followed by the expansion
{𝑣0 𝑣1 𝑣2, 𝑣0 𝑣1 𝑣2 𝑣3 }:

[Figure: before-and-after diagrams of the expansion.]
This completes the construction of Δ3 from 𝐴.
We now take the ideas from Example 6.27 and generalize to con-
struct Δ𝑛 out of expansions and attachments of evaders using the infor-
mation in a decision tree 𝐴.
Theorem 6.28. Let 𝐴 be a decision tree algorithm and 𝑘 the number of
pairs of evaders of 𝐴. If ∅ is not an evader of 𝐴, then there are evaders
𝜎₁¹, 𝜎₁², … , 𝜎₁ᵏ of 𝐴 in 𝑀, and evaders 𝜎₂¹, 𝜎₂², … , 𝜎₂ᵏ of 𝐴 not in 𝑀, along
with a nested sequence of subcomplexes of Δ𝑛,

𝑣 = 𝑀0 ⊆ 𝑀1 ⊆ ⋯ ⊆ 𝑀𝑘−1 ⊆ 𝑀𝑘 ⊆ 𝑀 = 𝑆0 ⊆ 𝑆1 ⊆ ⋯ ⊆ 𝑆𝑘 ⊆ Δ𝑛

where 𝑣 is a vertex of 𝑀 which is not an evader of 𝐴, such that

𝑣 ↗ 𝑀1 − 𝜎₁¹
𝑀1 ↗ 𝑀2 − 𝜎₁²
𝑀2 ↗ 𝑀3 − 𝜎₁³
⋮
𝑀𝑘−1 ↗ 𝑀𝑘 − 𝜎₁ᵏ
𝑀𝑘 ↗ 𝑀 = 𝑆0
𝑆0 ↗ 𝑆1 − 𝜎₂¹
⋮
𝑆𝑘−1 ↗ 𝑆𝑘 − 𝜎₂ᵏ
𝑆𝑘 ↗ Δ𝑛

If ∅ is an evader of 𝐴, the theorem holds if 𝜎₁¹ = ∅ and 𝑀0 = 𝑀1 = ∅.
This requires 𝑀2 = 𝜎₁².
Proof. First suppose that ∅ is not an evader of 𝐴. We will construct a
discrete Morse function on Δ𝑛 whose critical simplices are the evaders
of 𝐴 and induced gradient vector field 𝑉𝐴 = 𝑉 minus the evader pairs.
Let 𝑊 ⊆ 𝑉 be the set of pairs (𝛼, 𝛽) ∈ 𝑉 such that either both 𝛼, 𝛽 ∈ 𝑀 or
both 𝛼, 𝛽 ∉ 𝑀. By Lemma 6.23, 𝑊 is a gradient vector field on Δ𝑛. We now
determine the critical simplices of 𝑊. By construction, a pair (𝛼, 𝛽) of
simplices of 𝑉 is not in 𝑊 if and only if 𝛼 ∈ 𝑀 and 𝛽 ∉ 𝑀 or 𝛼 ∉ 𝑀 and
𝛽 ∈ 𝑀; i.e., all evaders of 𝐴 are critical simplices of 𝑊. Furthermore,
the vertex 𝑣, which is paired with ∅, is also critical. This vertex and the
evaders of 𝐴 form all the critical simplices of 𝑊. It remains to ensure
that the ordering of expansions and attachments given in the statement
of the theorem is respected.
To that end, let 𝑓 ∶ Δ𝑛 → ℝ be any discrete Morse function with
induced gradient vector field 𝑊. By definition, if 𝛼^(𝑝) ∈ 𝑀 and 𝛾^(𝑝+1) ∉
𝑀 with 𝛼 < 𝛾, then (𝛼, 𝛾) ∉ 𝑊 so that 𝑓(𝛾) > 𝑓(𝛼). Define

𝑎 ∶= sup{𝑓(𝛼) ∶ 𝛼 ∈ 𝑀},
𝑏 ∶= inf{𝑓(𝛼) ∶ 𝛼 ∉ 𝑀},
𝑐 ∶= 1 + 𝑎 − 𝑏,
𝑑 ∶= inf{𝑓(𝛼) ∶ 𝛼 ∈ Δ𝑛},

and a new discrete Morse function 𝑔 ∶ Δ𝑛 → ℝ by

𝑔(𝛼) ∶= ⎧ 𝑓(𝛼)      if 𝛼 ∈ 𝑀 − 𝑣,
        ⎨ 𝑓(𝛼) + 𝑐  if 𝛼 ∉ 𝑀,
        ⎩ 𝑑 − 1     if 𝛼 = 𝑣.
To see that 𝑔 is a discrete Morse function with the same critical simplices
as 𝑓, observe that for each 𝛼 ∈ 𝑀 and 𝛽 ∉ 𝑀, we have
𝑔(𝛽) = 𝑓(𝛽) + 𝑐 ≥ 𝑏 + 𝑐 = 1 + 𝑎 > 𝑎 ≥ 𝑔(𝛼). For each pair 𝛼^(𝑝) < 𝛽^(𝑝+1),
we have 𝑔(𝛽) > 𝑔(𝛼) if and only if 𝑓(𝛽) > 𝑓(𝛼). Then 𝑔 is a discrete
Morse function with the same critical simplices as 𝑓.
The case where ∅ is an evader of 𝐴 is similar. □
Exercise 6.29. Explain why the function 𝑓 in Theorem 6.28 may not be
the desired discrete Morse function. In other words, why was it neces-
sary to construct the function 𝑔?
The proof of Theorem 6.21 follows immediately from Theorem 4.1;
that is, for any decision tree 𝐴, 𝑒𝑖(𝐴) ≥ 𝑏𝑖(𝑀) where 𝑒𝑖 is the number of
pairs of evaders of index 𝑖. An immediate corollary is the following:
Corollary 6.30. For any decision tree algorithm 𝐴, the number of pairs
of evaders of 𝐴 is greater than or equal to ∑_{𝑖=0}^{𝑛} 𝑏𝑖.
Another corollary of Theorem 6.28 is the following:
Theorem 6.31. If 𝑀 is nonevasive, then 𝑀 is collapsible.
Problem 6.32. Prove Theorem 6.31.
An interesting question is whether or not the converse of Theorem
6.31 is true. That is, is nonevasive the same thing as collapsible? The
following example answers this question in the negative. One concept
that will be helpful is that of the link of a vertex. We define it below and
will study it in more detail in Chapter 10.
Definition 6.33. Let 𝐾 be a simplicial complex and 𝑣 ∈ 𝐾 a vertex. The
star of 𝑣 in 𝐾, denoted by star𝐾 (𝑣), is the simplicial complex induced by
the set of all simplices of 𝐾 containing 𝑣. The link of 𝑣 in 𝐾 is the set
link𝐾 (𝑣) ∶= star𝐾 (𝑣) − {𝜎 ∈ 𝐾 ∶ 𝑣 ∈ 𝜎}.
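With a complex stored as a set of frozensets (again our own convention for illustration; this code is not from the text), Definition 6.33 translates nearly verbatim into Python:

```python
def star(K, v):
    """star_K(v): the subcomplex induced by the simplices of K that
    contain v, i.e., those simplices together with all their faces."""
    generators = [s for s in K if v in s]
    return {f for f in K for s in generators if f <= s}

def link(K, v):
    """link_K(v) = star_K(v) minus the simplices containing v."""
    return {s for s in star(K, v) if v not in s}
```

For the full triangle on 𝑎, 𝑏, 𝑐, the star of 𝑎 is the whole complex, and the link of 𝑎 consists of 𝑏, 𝑐, and the edge 𝑏𝑐.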
The following lemma gives a sufficient condition for detecting eva-
siveness.
Lemma 6.34. If 𝑀 ⊆ Δ𝑛 is nonevasive, then there exists a vertex 𝑣 ∈ 𝑀
such that link𝑀 (𝑣) is nonevasive.
Example 6.35. Let 𝐶 be the simplicial complex below.

[Figure: a 2-dimensional simplicial complex 𝐶 on numbered vertices.]
Note that there is only one free pair (namely, {13, 137}) and that
starting from this free pair, we may collapse 𝐶. Showing that 𝐶 is evasive
is Problem 6.36.
Problem 6.36. Show that 𝐶 in Example 6.35 is evasive.
Chapter 7
The Morse complex
The Morse complex was first introduced and studied in a paper by Chari
and Joswig [42] and was studied further by Ayala et al. [7], Capitelli and
Minian [39], and Lin et al. [110], among others. Like much of what we
have seen so far, the Morse complex has at least two equivalent defini-
tions. We will give these definitions of the Morse complex and leave it to
you in Problem 7.7 to show that they are equivalent. Because we will be
working up to Forman equivalence in this chapter, we will often identify
a discrete Morse function 𝑓 with its induced gradient vector field 𝑉𝑓 and
use the notations interchangeably.
7.1. Two definitions
Recall from Section 2.2.2 that given a discrete Morse function 𝑓 ∶ 𝐾 →
ℝ, we construct the induced gradient vector field (which is also a dis-
crete vector field) 𝑉𝑓 = 𝑉 and use this to form the directed Hasse di-
agram ℋ𝑉 . This is accomplished by drawing an upward arrow on the
edge connecting each regular pair while drawing a downward arrow on
all remaining edges. We then saw in Theorem 2.69 that the resulting di-
rected graph will contain no directed cycles. Any Hasse diagram drawn
in this way (or more generally a directed graph with no directed cycles)
we will call acyclic. Furthermore, we know from Lemma 2.24 that the
set of upward-pointing arrows forms a matching on the set of edges of
ℋ𝑉 , that is, a set of edges without common nodes. Hence, to use our
new jargon, a discrete Morse function always yields an acyclic match-
ing of the induced directed Hasse diagram. An acyclic matching of the
directed Hasse diagram is called a discrete Morse matching.
Example 7.1. To remind yourself of this construction, let 𝑓 ∶ 𝐾 → ℝ be
the discrete Morse function from Example 2.88 given by

[Figure: the complex 𝐾 with each simplex labeled by its value under 𝑓.]
Giving names to the vertices, the induced gradient vector field 𝑉𝑓 is
given by

[Figure: 𝐾 with vertices 𝑣1, … , 𝑣7 and the arrows of 𝑉𝑓 drawn.]
while the directed Hasse diagram ℋ𝑉 is as shown below (note that the
downward arrows are not drawn in order to avoid cluttering the picture).

[Figure: the Hasse diagram with top row the 2-simplex 𝑣3 𝑣4 𝑣7; middle row
the edges 𝑣1 𝑣2, 𝑣2 𝑣3, 𝑣2 𝑣5, 𝑣1 𝑣5, 𝑣2 𝑣6, 𝑣3 𝑣6, 𝑣6 𝑣7, 𝑣3 𝑣4, 𝑣3 𝑣7, 𝑣4 𝑣7;
bottom row the vertices 𝑣1, … , 𝑣7.]
If 𝑓 ∶ 𝐾 → ℝ is a discrete Morse function, we will sometimes use
ℋ𝑓 to denote the directed Hasse diagram induced by 𝑉𝑓 . Because we are
primarily interested in ℋ𝑓 in this section, we define two discrete Morse
functions 𝑓, 𝑔 ∶ 𝐾 → ℝ to be Hasse equivalent if ℋ𝑓 = ℋ𝑔 . Recall
from Corollary 2.70 that 𝑓 and 𝑔 are Hasse equivalent if and only if they
are Forman equivalent.
It is not difficult to see that one could include more edges in the dis-
crete Morse matching in Example 7.1. Moreover, there are many differ-
ent discrete Morse matchings one could put on the Hasse diagram, rang-
ing from the empty discrete Morse matching (no edges) to a “maximal”
discrete Morse matching consisting of as many edges as possible without
a cycle. It turns out that the collection of all discrete Morse matchings
has the added structure of itself being a simplicial complex.
Definition 7.2. Let 𝐾 be a simplicial complex. The Morse complex
of 𝐾, denoted by ℳ(𝐾), is the simplicial complex on the set of edges of
ℋ(𝐾) defined as the set of subsets of edges of ℋ(𝐾) which form discrete
Morse matchings (i.e., acyclic matchings), excluding the empty discrete
Morse matching.
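Definition 7.2 amounts to a membership test that is easy to implement: a set of Hasse-diagram edges lies in ℳ(𝐾) exactly when it is a matching and the reoriented Hasse diagram has no directed cycle. The Python sketch below is our own encoding (𝐾 as a set of frozensets, a matching as a set of covering pairs (𝛼, 𝛽)), not code from the text.

```python
def covering_pairs(K):
    """All (alpha, beta) with alpha a codimension-one face of beta;
    these are the edges of the Hasse diagram of K."""
    return [(a, b) for a in K for b in K
            if a < b and len(b) == len(a) + 1]

def is_morse_matching(K, matching):
    """Is `matching` (a set of covering pairs) an acyclic matching,
    i.e., a discrete Morse matching on the Hasse diagram of K?"""
    used = [s for pair in matching for s in pair]
    if len(used) != len(set(used)):       # some simplex matched twice
        return False
    # Orient the Hasse diagram: matched edges point up, the rest down.
    arcs = {s: [] for s in K}
    for a, b in covering_pairs(K):
        if (a, b) in matching:
            arcs[a].append(b)
        else:
            arcs[b].append(a)
    # Depth-first search for a directed cycle.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {s: WHITE for s in K}
    def has_cycle(s):
        color[s] = GRAY
        for t in arcs[s]:
            if color[t] == GRAY or (color[t] == WHITE and has_cycle(t)):
                return True
        color[s] = BLACK
        return False
    return not any(color[s] == WHITE and has_cycle(s) for s in K)
```

On the boundary of a triangle, pairing each vertex with the next edge around produces a directed cycle, so that candidate is rejected. (The empty matching passes this check but is excluded from ℳ(𝐾) by the definition above.)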
Example 7.3. Let 𝐺 be the simplicial complex given by

[Figure: the graph on vertices 𝑎, 𝑏, 𝑐, 𝑑 with edges 𝑎𝑏, 𝑏𝑐, and 𝑏𝑑.]

Its Hasse diagram ℋ𝐺 = ℋ has top row the edges 𝑎𝑏, 𝑏𝑐, 𝑏𝑑 and bottom
row the vertices 𝑎, 𝑏, 𝑐, 𝑑.
We wish to find all discrete Morse matchings on ℋ. Four such
matchings are given below.

[Four figures: copies of ℋ, each with a single edge of the Hasse diagram
matched.]
Call these matchings 𝑏𝑑, 𝑐𝑏, 𝑏𝑐, and 𝑎𝑏, respectively. Then these four
matchings correspond to four vertices in the Morse complex. Higher-
dimensional simplices then correspond to more arrows on the Hasse di-
agram. For example, there is an edge between 𝑏𝑑 and 𝑎𝑏 in the Morse
complex since
[Figure: ℋ with the two edges corresponding to 𝑏𝑑 and 𝑎𝑏 matched]
is a discrete Morse matching. There is not, however, an edge between 𝑏𝑐
and 𝑏𝑑 in the Morse complex since
[Figure: ℋ with the two edges corresponding to 𝑏𝑐 and 𝑏𝑑 matched]
is not a discrete Morse matching. We obtain a 2-simplex in the Morse
complex by observing that
[Figure: ℋ with three edges matched]
is a discrete Morse matching. Continuing in this way, one constructs the
Morse complex ℳ(𝐺).
Problem 7.4. Compute the Morse complex of the simplicial complex
given in Example 7.3.
We now give an alternative definition of the Morse complex. This
definition builds the Morse complex out of gradient vector fields from
the ground up. If a discrete Morse function 𝑓 has only one regular pair,
we call 𝑓 primitive. Given multiple primitive discrete Morse functions,
we wish to combine them to form a new discrete Morse function. This
will be accomplished by viewing each discrete Morse function as a gra-
dient vector field and “overlaying” all the arrows. Clearly such a con-
struction may or may not yield a gradient vector field.
Example 7.5. Let primitive gradient vector fields 𝑓0, 𝑓1, and 𝑓2 be given
by [figures omitted], respectively. Then 𝑓0 and 𝑓1 combine to form a new
gradient vector field 𝑓 [figure omitted]. But clearly combining 𝑓1 and 𝑓2
gives [figure omitted], which does not yield a gradient vector field.
If 𝑓, 𝑔 ∶ 𝐾 → ℝ are two discrete Morse functions, write 𝑔 ≤ 𝑓 when-
ever the regular pairs of 𝑔 are also regular pairs of 𝑓. In general, we
say that a collection of primitive discrete Morse functions 𝑓0 , 𝑓1 , … , 𝑓𝑛 is
compatible if there exists a discrete Morse function 𝑓 such that 𝑓𝑖 ≤ 𝑓
for all 0 ≤ 𝑖 ≤ 𝑛.
Definition 7.6. The Morse complex of 𝐾, denoted by ℳ(𝐾), is the
simplicial complex whose vertices are given by primitive discrete Morse
functions and whose 𝑛-simplices are given by gradient vector fields with
𝑛 + 1 regular pairs. A gradient vector field 𝑓 is then associated with
all primitive gradient vector fields 𝑓 ∶= {𝑓0 , … , 𝑓𝑛 } with 𝑓𝑖 ≤ 𝑓 for all
0 ≤ 𝑖 ≤ 𝑛.
Problem 7.7. Prove that Definitions 7.2 and 7.6 are equivalent.
Example 7.8. We construct the Morse complex for the path graph with
vertices 𝑢, 𝑣, 𝑤 and edges 𝑢𝑣 and 𝑣𝑤.
There are four primitive pairs, namely, (𝑢, 𝑢𝑣), (𝑤, 𝑣𝑤), (𝑣, 𝑢𝑣), and
(𝑣, 𝑣𝑤). These pairs correspond to four vertices in the Morse complex.
The only primitive vectors that are compatible are 𝑉1 = {(𝑢, 𝑢𝑣), (𝑣, 𝑣𝑤)},
𝑉2 = {(𝑤, 𝑣𝑤), (𝑣, 𝑢𝑣)}, and 𝑉3 = {(𝑢, 𝑢𝑣), (𝑤, 𝑣𝑤)}. Hence the Morse
complex is given by the path

(𝑣, 𝑣𝑤) --𝑉1-- (𝑢, 𝑢𝑣) --𝑉3-- (𝑤, 𝑣𝑤) --𝑉2-- (𝑣, 𝑢𝑣)
Exercise 7.9. Construct the Morse complex for the graph [figure omitted].
Problem 7.10. Let 𝐾 be a 1-dimensional, connected simplicial complex
with 𝑒 1-simplices and 𝑣 0-simplices. Prove that dim(ℳ(𝐾)) = 𝑣 − 2.
How many vertices does ℳ(𝐾) have?
Problem 7.11. Prove that there does not exist a simplicial complex 𝐾
such that ℳ(𝐾) = Δ𝑛 for any 𝑛 ≥ 1.
Capitelli and Minian have shown that a simplicial complex is
uniquely determined, up to isomorphism, by its Morse complex [39]. The
proof is beyond the scope of this text, but we can show that a natural
weakening is false. Specifically, we show that the Morse complex does not
determine the complex up to simple homotopy type.
Example 7.12 (Capitelli and Minian). Consider the graphs 𝐺 given by
[Figure: the path on vertices 𝑢, 𝑣, 𝑤] and 𝐺′ given by [Figure: a graph
with vertices including 𝑏 and 𝑐]. We saw in Example 7.8 that ℳ(𝐺) is
given by a path on four vertices, and it can be checked that ℳ(𝐺′) is
collapsible. Clearly then 𝐺 and 𝐺′ do not have the same simple homotopy
type, but ℳ(𝐺) ↘ ∗ ↙ ℳ(𝐺′), so the Morse complexes have the same
simple homotopy type.
At this point, it should be clear that the Morse complex gets out of
hand very, very quickly. Hence we will study two special cases of the
Morse complex.
7.2. Rooted forests
In this section we limit ourselves to the study of 1-dimensional simplicial
complexes or graphs. Hence, we will employ some terminology from
graph theory. Our reference for graph theory basics and more informa-
tion is [43], but we give a quick review of commonly used terms and
notation here. Any 1-dimensional simplicial complex is also referred to
as a graph. The 0-simplices are vertices and the 1-simplices are called
edges. A graph 𝐺 such that 𝑏0 (𝐺) = 1 is called connected. A con-
nected graph 𝐺 with 𝑏1 (𝐺) = 0 is called a tree.
Example 7.13. Let 𝐺 be the graph [figure omitted] and consider the three
primitive gradient vector fields [figures omitted], denoted by 𝑓0, 𝑓1, and
𝑓2, respectively. Then 𝑓0 and 𝑓1 are compatible, and 𝑓0 and 𝑓2 are
compatible, since there exist the gradient vector fields 𝑓 and 𝑔 given by
[figure omitted] and [figure omitted], respectively. Notice that by removing
all critical edges from 𝑓 and 𝑔, we
obtain two graphs, each consisting of two trees. A graph that is made
up of one or more components, each of which is a tree, is called a for-
est, appropriately enough. Furthermore, each tree “flows” via the gra-
dient vector field to a unique vertex, called the root. The root of a tree
equipped with a gradient vector field without critical edges is the unique
critical vertex. Such a graph is called a rooted forest.
Definition 7.14. Let 𝐺 be a graph. A subset 𝐹 of edges of 𝐺 along with
a choice of direction for each edge is called a rooted forest of 𝐺 if 𝐹 is
a forest as an undirected graph and each component of 𝐹 has a unique
root with respect to the given gradient vector field.
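Definition 7.14 can also be checked mechanically. In this hedged Python sketch (our own encoding, not from the text), an arrow (𝑢, 𝑣) records that the edge 𝑢𝑣 is oriented away from 𝑢, i.e., that 𝑢 is paired with 𝑢𝑣; it suffices to verify that no vertex is the tail of two arrows and that the underlying undirected graph is a forest, which together force a unique root in each component.

```python
def is_rooted_forest(vertices, arrows):
    """arrows: a set of pairs (u, v), the edge uv oriented away from u.
    Returns True when the arrows form a rooted forest in the sense of
    Definition 7.14."""
    tails = [u for u, _ in arrows]
    if len(tails) != len(set(tails)):     # a vertex paired with two edges
        return False
    # Union-find to detect an undirected cycle among the chosen edges.
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in arrows:
        ru, rv = find(u), find(v)
        if ru == rv:                       # this edge closes a cycle
            return False
        parent[ru] = rv
    return True
```

On the path with vertices 𝑢, 𝑣, 𝑤, orienting both edges toward 𝑣 gives a rooted forest with root 𝑣, while pairing 𝑣 with both of its edges violates the matching condition.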
In Example 7.13, we obtained a rooted forest from a gradient vector
field, i.e., an element of the Morse complex. Conversely, you can imagine
that given a rooted forest, we can construct an element of the Morse
complex. This is summed up in the following theorem.
Theorem 7.15. Let 𝐺 be a graph. Then there is a bijection between the
simplices of ℳ(𝐺) and rooted forests of 𝐺.
Proof. Let 𝑓 = 𝜎 ∈ ℳ(𝐺). We construct a rooted forest ℛ(𝜎) of 𝐺
by orienting the edges according to the gradient of 𝑓 and removing the
critical edges. By definition, 𝑓 can be viewed as giving an orientation to a
subset of edges in 𝐺. This subset of edges (along with the corresponding
vertices) is clearly a forest. To see that it is rooted, consider any tree in
the forest. If the tree does not have a unique root, then it contains either
a directed cycle or multiple roots. Clearly a directed cycle is impossible,
since it is a tree. Multiple roots would occur only if some vertex had
two arrows leaving it, i.e., some vertex were paired with two distinct edges.
This is impossible by Lemma 2.24. Hence the gradient vector field of 𝑓
with the critical edges removed is a rooted forest of 𝐺.
Now let 𝑅 be a rooted forest of 𝐺. We construct a simplex 𝒮(𝑅) ∈
ℳ(𝐺). This is accomplished by starting with the rooted forest with each
edge oriented, and then adding in the edges of 𝐺 not in 𝑅 as unmatched
(critical) edges. This provides an element 𝒮(𝑅) ∈ ℳ(𝐺) since each tree
has a unique root. Hence, each
vertex and edge appears in at most one pair, creating a gradient vector
field. □
Problem 7.16. Show that the two operations in Theorem 7.15 are in-
verses of each other. That is, show that if 𝑓 ∈ ℳ(𝐺), then 𝒮(ℛ(𝑓)) = 𝑓,
and if 𝑅 is a rooted forest of 𝐺, then ℛ(𝒮(𝑅)) = 𝑅.
Given the result of Theorem 7.15, we will use “gradient vector field”
and “rooted forest” interchangeably (again, in the context of graphs).
Simplicial complexes of rooted forests were first studied by Kozlov
[101]. However, Kozlov’s work was not in the context of discrete Morse
theory. It was Chari and Joswig, cited above, who first worked within
the general framework.
Just counting the number of simplices in a Morse complex can be a
huge job. Although Theorem 7.15 gives us a way to count the number of
simplices of the Morse complex, it simply reduces the problem to one in
graph theory. Fortunately, much is known. For details on this and other
interesting counting problems in graph theory, see J. W. Moon’s classic
book [120].
7.3. The pure Morse complex
Another special class of simplicial complexes are the pure complexes.
Definition 7.17. An abstract simplicial complex 𝐾 is said to be pure if
all its facets are of the same dimension.
As mentioned above, the Morse complex is quite large and difficult
to compute. We can make our work easier by “purifying” any Morse
complex.
Definition 7.18. Let 𝑛 ∶= dim(ℳ(𝐾)). The pure Morse complex of
𝐾, denoted by ℳ𝑃 (𝐾), is the subcomplex of ℳ(𝐾) generated by the
facets of dimension 𝑛.
Problem 7.19. Give an example to show that if 𝐾 is a pure simplicial
complex, then ℳ(𝐾) is not necessarily pure (and thus Definition 7.18 is
not redundant).
7.3.1. The pure Morse complex of trees. In Theorem 7.15, we found
a bijection between the Morse complex of a graph and rooted forests of
that same graph. We now investigate this connection more deeply. First,
we look at the pure Morse complex of a tree. The results in this section
are originally due to R. Ayala et al. [7, Proposition 2].
Example 7.20. Let 𝐺 be the graph [figure: the star with center 𝑐 and
leaves 𝑎, 𝑏, 𝑑]. To simplify notation, we will denote a 0-simplex (𝑣, 𝑣𝑢) of
ℳ(𝐺) by 𝑣𝑢, indicating that the arrow points away from vertex 𝑣 towards
vertex 𝑢 (this notation only works when 𝐺 is a graph). It is easily verified
that ℳ(𝐺) is given by

[Figure: a complex on the six vertices 𝑎𝑐, 𝑏𝑐, 𝑑𝑐, 𝑐𝑎, 𝑐𝑏, 𝑐𝑑.]
Notice that there is a 1–1 correspondence between the vertices of
𝐺 and the facets of ℳ𝑃 (𝐺) = ℳ(𝐺). Furthermore, two facets in ℳ(𝐺)
share a common edge if and only if the corresponding vertices are con-
nected by an edge in 𝐺.
Problem 7.21. Let 𝑇 be a tree with 𝑛 vertices.
(i) Show there is a bijection between the vertices of 𝑇 and the facets
of ℳ𝑃 (𝑇). [Hint: Given a vertex 𝑢 of 𝑇, how many rooted
forests rooted at 𝑢 that use every edge can be constructed?]
(ii) If 𝑣 ∈ 𝑇, denote by 𝜎𝑣 the unique corresponding facet in ℳ𝑃 (𝑇).
Prove that 𝜎𝑣 is an (𝑛 − 2)-simplex.
(iii) Prove that if 𝑢𝑣 is an edge in 𝑇, then 𝜎𝑢 and 𝜎𝑣 share an (𝑛 − 3)-
face. [Hint: If 𝑢𝑣 is an edge in 𝑇, consider the gradient vector
field 𝜎𝑣 − {𝑢𝑣}.]
You have proved the following proposition.
Proposition 7.22. Let 𝑇 be a tree on 𝑛 vertices. Then ℳ𝑃 (𝑇) is the
simplicial complex constructed by replacing each vertex of 𝑇 with a copy of
Δⁿ⁻² and decreeing that two such facets share an (𝑛 − 3)-face if and only
if the corresponding vertices are connected by an edge.
As an important corollary, we obtain the following result:
Corollary 7.23. If 𝑇 is a tree with at least three vertices, then ℳ𝑃 (𝑇) is
collapsible.
Proof. We proceed by induction on 𝑛, the number of vertices of 𝑇. For
𝑛 = 3, there is only one tree, and we computed its Morse complex in
Example 7.8 to be a path on four vertices, which is clearly collapsible.
Now suppose that for some 𝑛 ≥ 3, every tree on 𝑛 vertices has the property
that its pure Morse complex is collapsible, and let 𝑇 be any tree on 𝑛 + 1
vertices. Then 𝑇 has a leaf by Lemma 7.24, call it 𝑣. Consider the (𝑛 − 1)-
dimensional simplex 𝜎𝑣 ∈ ℳ𝑃 (𝑇). By Proposition 7.22, all the (𝑛 − 2)-
dimensional faces of 𝜎𝑣 are free, except for one (namely, the one
corresponding to the edge connecting 𝑣 to its unique neighbor in 𝑇). Hence
𝜎𝑣 collapses onto its non-free facet. It is easy to see that the resulting
complex is precisely the pure Morse complex of 𝑇 − 𝑣, which is a tree on 𝑛
vertices and hence collapsible by the inductive hypothesis. Thus the result
is proved. □
Lemma 7.24. Every tree with at least two vertices has at least two leaves.
Problem 7.25. Prove Lemma 7.24.
7.3.2. The pure Morse complex of graphs. We now take some of the
above ideas and begin to extend them to arbitrary (connected) graphs.
The goal of this section will then be to count the number of facets in the
pure Morse complex of any graph 𝐺.
Definition 7.26. Let 𝐺 be a graph. A gradient vector field (rooted forest)
𝑓 = {𝑓0 , … , 𝑓𝑛 } on 𝐺 is said to be maximum if 𝑛 + 1 = 𝑒 − 𝑏1 where 𝑒 is
the number of edges of 𝐺 and 𝑏1 is the first Betti number of 𝐺.
In other words, a gradient vector field is maximum if “as many edges
as possible have arrows.” Clearly 𝑓 is a maximum gradient vector field
of 𝐺 if and only if 𝑓 is a facet in ℳ𝑃 (𝐺).
Exercise 7.27. Is a maximum gradient vector field on a graph always
unique? Prove it or give a counterexample.
The following exercise gives a characterization of maximum gradi-
ent vector fields on graphs. It is simply a matter of staring at both defi-
nitions and realizing that they are saying the same thing.
Exercise 7.28. Let 𝐺 be a connected graph. Prove that a gradient vec-
tor field 𝑓 on 𝐺 is maximum if and only if 𝑓 is a perfect discrete Morse
function.
It then immediately follows from this and Problem 4.13 that every
connected graph has a maximum gradient vector field.
If 𝐺 is a connected graph on 𝑣 vertices, we call any subgraph of 𝐺 on
𝑣 vertices that is a tree a spanning tree. We now determine a relation-
ship between the maximum gradient vector fields of 𝐺 and its spanning
trees. First we state a lemma. If 𝐺 is a graph, let 𝑣(𝐺) denote the number
of vertices of 𝐺 and 𝑒(𝐺) the number of edges of 𝐺.
Lemma 7.29. Let 𝑇 be a spanning tree of 𝐺. Then 𝑒(𝐺) − 𝑒(𝑇) = 𝑏1 (𝐺).
Proof. By Theorem 3.23, 𝑣 − 𝑒 = 𝑏0 − 𝑏1 for any graph. In particular, for
the spanning tree 𝑇 and graph 𝐺, we have 𝑣(𝑇)−𝑒(𝑇) = 1−0 and 𝑣(𝐺)−
𝑒(𝐺) = 1 − 𝑏1 (𝐺). Subtracting the latter equation from the former and
noting that 𝑣(𝑇) = 𝑣(𝐺) (since 𝑇 is a spanning tree), we obtain 𝑒(𝐺) −
𝑒(𝑇) = 𝑏1 (𝐺). □
Theorem 7.30. Let 𝐺 be a connected graph. Then 𝑓 is a maximum
gradient vector field on 𝐺 if and only if 𝑓 is a maximum gradient vector
field on 𝑇 for some spanning tree 𝑇 ⊆ 𝐺.
Proof. We show the backward direction and leave the forward direction
as Problem 7.31. Suppose that 𝑓 = {𝑓0 , … , 𝑓𝑛 } is a maximum gradient
vector field on 𝑇 for some spanning tree 𝑇 of 𝐺 so that 𝑛 + 1 = 𝑒(𝑇) −
𝑏1 (𝑇) = 𝑒(𝑇). We need to show that 𝑓 is a maximum gradient vector
field on 𝐺. Suppose by contradiction that 𝑓 is not maximum on 𝐺 so
that 𝑛 + 1 < 𝑒(𝐺) − 𝑏1 (𝐺). This implies that 𝑒(𝑇) < 𝑒(𝐺) − 𝑏1 (𝐺). But 𝑇
is a spanning tree of 𝐺, so by Lemma 7.29 we have 𝑒(𝐺) − 𝑒(𝑇) = 𝑏1 (𝐺),
a contradiction. Thus a maximum gradient vector field on a spanning
tree 𝑇 of 𝐺 gives rise to a maximum gradient vector field of 𝐺. □
Problem 7.31. Prove the forward direction of Theorem 7.30.
Translating this into the language of the Morse complex, we obtain
the following corollary.
Corollary 7.32. Let 𝐺 be a connected graph. Then

ℳ𝑃 (𝐺) = ⋃_{𝑇𝑖 ∈ 𝑆(𝐺)} ℳ𝑃 (𝑇𝑖)

where 𝑆(𝐺) is the set of all spanning trees of 𝐺.
Problem 7.33. Prove Corollary 7.32.
Finally, we can count the number of facets in ℳ𝑃 (𝐺). In order to do
so, we need a few other ideas from graph theory. Two vertices of a graph
are said to be adjacent if they are joined by an edge. The number
of edges containing a vertex 𝑣 is known as the degree of 𝑣, denoted by deg(𝑣).
Given a graph 𝐺 with 𝑛 vertices, we then form the 𝑛 × 𝑛 matrix 𝐿(𝐺), the
Laplacian, whose entries are given by
𝐿𝑖,𝑗 (𝐺) ∶=
  deg(𝑣𝑖 )  if 𝑖 = 𝑗,
  −1        if 𝑖 ≠ 𝑗 and 𝑣𝑖 is adjacent to 𝑣𝑗 ,
  0         otherwise.
Example 7.34. Let 𝐺 be given by
[figure: the graph on vertices 𝑣1 , 𝑣2 , 𝑣3 , 𝑣4 with edges 𝑣1 𝑣2 , 𝑣1 𝑣3 , 𝑣1 𝑣4 , 𝑣2 𝑣4 , 𝑣3 𝑣4 ]
Then it can be easily checked that
𝐿 = ⎛  3 −1 −1 −1 ⎞
    ⎜ −1  2  0 −1 ⎟
    ⎜ −1  0  2 −1 ⎟
    ⎝ −1 −1 −1  3 ⎠.
Equipped with the Laplacian, we may utilize the following theorem
to count the number of spanning trees of 𝐺, which in turn will allow us
to count the number of facets of ℳ𝑃 (𝐺).
Theorem 7.35 (Kirchhoff’s theorem). Let 𝐺 be a connected graph on
𝑛 vertices with 𝜆1 , … , 𝜆𝑛−1 the non-zero eigenvalues of 𝐿(𝐺). Then the
number of spanning trees of 𝐺 is given by (1/𝑛)𝜆1 ⋯ 𝜆𝑛−1 .
A proof of this fact goes beyond the scope of this text but may be
found in [43, Theorem 4.15].
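Kirchhoff's theorem lends itself to a quick computational check. The sketch below (function names and encoding are my own, not from the text) counts spanning trees two ways: via the equivalent cofactor form of the matrix-tree theorem, deleting one row and column of 𝐿(𝐺) and taking a determinant, which agrees with (1/𝑛)𝜆1 ⋯ 𝜆𝑛−1 , and by brute-force enumeration of edge subsets.

```python
from fractions import Fraction
from itertools import combinations

def laplacian(n, edges):
    """Graph Laplacian: deg(v_i) on the diagonal, -1 for adjacent pairs."""
    L = [[0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    return L

def spanning_tree_count(n, edges):
    """Matrix-tree count: delete row/column 0 of the Laplacian and take the
    determinant (equal to (1/n) times the product of non-zero eigenvalues)."""
    L = laplacian(n, edges)
    M = [[Fraction(L[i][j]) for j in range(1, n)] for i in range(1, n)]
    det = Fraction(1)
    for col in range(n - 1):
        pivot = next((r for r in range(col, n - 1) if M[r][col] != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, n - 1):
            factor = M[r][col] / M[col][col]
            for c in range(col, n - 1):
                M[r][c] -= factor * M[col][c]
    return int(det)

def spanning_tree_count_brute(n, edges):
    """Check: count (n-1)-edge subsets that connect all n vertices acyclically."""
    count = 0
    for subset in combinations(edges, n - 1):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        ok = True
        for u, v in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                ok = False
                break
            parent[ru] = rv
        count += ok
    return count

# Example: the 4-cycle has 4 spanning trees.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert spanning_tree_count(4, c4) == spanning_tree_count_brute(4, c4) == 4
```

The exact `Fraction` arithmetic avoids floating-point issues that would arise from computing eigenvalues numerically.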
Theorem 7.36. Let 𝐺 be a connected graph on 𝑛 vertices. Then
the number of facets of ℳ𝑃 (𝐺) is 𝜆1 ⋯ 𝜆𝑛−1 . Equivalently, there are
𝜆1 ⋯ 𝜆𝑛−1 maximum gradient vector fields of 𝐺.
Proof. By definition, the facets of ℳ𝑃 (𝐺) are in 1–1 correspondence
with the maximum gradient vector fields on 𝐺. By Theorem 7.30, the
maximum gradient vector fields on 𝐺 are precisely those of all the span-
ning trees of 𝐺. Given a spanning tree 𝑇 of 𝐺 and a vertex 𝑣 of 𝐺, there exists
exactly one forest rooted in 𝑣. Hence, each spanning tree determines 𝑛
different gradient vector fields. Since there are (1/𝑛)𝜆1 ⋯ 𝜆𝑛−1 spanning
trees by Theorem 7.35, there are exactly 𝑛 ⋅ ((1/𝑛)𝜆1 ⋯ 𝜆𝑛−1 ) = 𝜆1 ⋯ 𝜆𝑛−1
maximum gradient vector fields on 𝐺 and hence 𝜆1 ⋯ 𝜆𝑛−1 facets of ℳ𝑃 (𝐺). □
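The step "each spanning tree determines 𝑛 different gradient vector fields" can be made concrete: fixing a root 𝑣, pair every other vertex with the first edge on its path toward 𝑣. A minimal sketch, with my own encoding of arrows as a vertex-to-edge dictionary; the pairing rule is my reading of the rooted-forest claim in the proof.

```python
from collections import deque

def gvf_from_rooted_tree(n, tree_edges, root):
    """Pair each non-root vertex w with the edge {w, parent(w)} on its path
    to the root; the root is the unique critical vertex of the result."""
    adj = {v: [] for v in range(n)}
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)
    arrows, seen, q = {}, {root}, deque([root])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                arrows[w] = (min(w, u), max(w, u))  # vertex w paired with edge {w, u}
                q.append(w)
    return arrows

# Example: on the path 0 - 1 - 2 rooted at 0, both other vertices
# are paired with the edge pointing back toward the root.
assert gvf_from_rooted_tree(3, [(0, 1), (1, 2)], 0) == {1: (0, 1), 2: (1, 2)}
```

Running this over every (spanning tree, root) pair enumerates the maximum gradient vector fields counted in the proof.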
Problem 7.37. Count the number of facets of ℳ𝑃 (𝐺) for 𝐺 in Example
7.34 (if you know how to compute eigenvalues).
Problem 7.38. Compute the Laplacian of the graph in Example 7.20
and use Theorem 7.36 to compute the number of facets of ℳ𝑃 (𝐺). Does
your answer match that computed by hand in the example?
Chapter 8
Morse homology
We saw in Chapter 4 how the theory of simplicial homology relates to
discrete Morse theory. This relationship is most clearly seen in Theo-
rem 4.1, where we proved that the number of critical 𝑖-simplices always
bounds from above the 𝑖th 𝔽2 -Betti number. But simplicial homology is
not the only means by which to compute homology. The discrete Morse
inequalities just mentioned point to a much deeper and more profound
relationship between discrete Morse theory and homology. The goal of
this chapter will be to develop a theory to compute simplicial homology
using discrete Morse theory. Once we develop Morse homology, we will
use it in Section 8.5 to make computing Betti numbers of certain simpli-
cial complexes easier. We will furthermore utilize this theory to perform
computations algorithmically in Section 9.2.
Let 𝑓 be a discrete Morse function on 𝐾. In order to define Morse
homology, we undertake a deeper study of 𝑉 = 𝑉𝑓 , the induced gradient
vector field. In particular, we will view it as a function. Using this, we
will construct a collection of 𝔽2 -vector spaces 𝕜Φ𝑝 (𝐾), along with linear
transformations between them, just as we did in Section 3.2. The chain
complex consisting of these vector spaces 𝕜Φ 𝑝 (𝐾) taken together with lin-
ear transformations between them is referred to as the flow complex. It
will then be shown that this chain complex is the same as another chain
complex, called the critical complex.¹ The vector spaces in this chain
complex are not only much smaller than the ones in the standard chain
complex, but the boundary maps can be computed using a gradient vec-
tor field. From there, just as before, we can construct homology. We
will then prove that this way of constructing homology is the same as
our construction of homology in Section 3.2. Thus the critical complex
has the distinct advantage that since the matrices and vector spaces are
much smaller, homology is easier to compute.
8.1. Gradient vector fields revisited
We first view the gradient vector field as a function. Recall from Sec-
tion 3.2 that if 𝐾 is a simplicial complex, 𝐾𝑝 denotes the set of all 𝑝-
dimensional simplices of 𝐾, 𝑐𝑝 ∶= |𝐾𝑝 | is the size of 𝐾𝑝 , and 𝕜𝑐𝑝 is
the unique vector space of dimension 𝑐𝑝 with each element of 𝐾𝑝 cor-
responding to a basis element of 𝕜𝑐𝑝 . If 𝑓 is a discrete Morse function
on 𝐾, the induced gradient vector field was defined in Chapter 2 by
𝑉 = 𝑉𝑓 ∶= {(𝜎(𝑝) , 𝜏(𝑝+1) ) ∶ 𝜎 < 𝜏, 𝑓(𝜎) ≥ 𝑓(𝜏)}.
Definition 8.1. Let 𝑓 ∶ 𝐾 → ℝ be a discrete Morse function with 𝑉 = 𝑉𝑓
the induced gradient vector field. Define a function 𝑉𝑝 ∶ 𝕜𝑐𝑝 → 𝕜𝑐𝑝+1 by
𝑉𝑝 (𝜎) ∶=
  𝜏   if there exists (𝜎, 𝜏) ∈ 𝑉,
  0   otherwise.
Abusing language and notation, the set of all functions 𝑉 ∶= {𝑉𝑝 } for 𝑝 =
0, 1, … , dim(𝐾) will be referred to as the gradient vector field induced
by 𝑓 (or 𝑉).
Exercise 8.2. Prove that if 𝜎 is critical, then 𝑉(𝜎) = 0. Does the converse
hold?
¹In the literature, this is sometimes referred to as the Morse complex, so be sure not to confuse it with the object defined in Chapter 7.
Example 8.3. Let 𝑉 be the gradient vector field
[figure: a gradient vector field on a graph with vertices 𝑎1 , … , 𝑎10 ]
Then 𝑉 simply assigns to any simplex that is the tail of an arrow the
corresponding head; for example, 𝑉0 (𝑎4 ) = 𝑎4 𝑎3 , 𝑉0 (𝑎3 ) = 𝑎3 𝑎6 , and 𝑉1 (𝑎4 𝑎3 ) =
𝑉0 (𝑎2 ) = 0.
We can also recast some known facts about the gradient vector field
discussed in Chapter 2 from the function viewpoint.
Proposition 8.4. Let 𝑉 be a gradient vector field on a simplicial complex
𝐾 and 𝜎(𝑝) ∈ 𝐾. Then
(i) 𝑉𝑖+1 ∘ 𝑉𝑖 = 0 for all integers 𝑖 ≥ 0;
(ii) |{𝜏(𝑝−1) ∶ 𝑉(𝜏) = 𝜎}| ≤ 1;
(iii) 𝜎 is critical if and only if 𝜎 ∉ Im(𝑉) and 𝑉(𝜎) = 0.
Proof. We suppress the subscripts when there is no confusion or need
to refer to dimension. If 𝜏 is not the tail of an arrow (the first element
in an ordered pair in 𝑉), then 𝑉(𝜏) = 0. Hence, assume (𝜏, 𝜎) ∈ 𝑉 so
that 𝑉(𝜏) = 𝜎. By Lemma 2.24, since 𝜎 is the head of an arrow in 𝑉, it
cannot be the tail of any arrow in 𝑉; hence 𝑉(𝑉(𝜏)) = 𝑉(𝜎) = 0. The
same lemma immediately yields (ii). Part (iii) is Problem 8.5. □
Problem 8.5. Prove part (iii) of Proposition 8.4.
8.1.1. Gradient flow. Look back at Example 8.3. We would now like
to use our new understanding of 𝑉 to help create a description of the
“flow” at a vertex, say 𝑎4 ; that is, starting at 𝑎4 , we would like an alge-
braic way to determine where one is sent after following a sequence of
arrows for a fixed number of steps and, ultimately, determine if one ends
up flowing to certain simplices and staying there indefinitely. Think, for
example, of a series of vertical pipes with a single inflow but emptying
in several locations. Water poured into the opening flows downward
through several paths and ultimately ends in those several locations.
Following the arrows in Example 8.3, we see that the flow starting
at 𝑎4 ultimately leads us to vertex 𝑎9 . But what would the flow mean
in higher dimensions? The next definition will define such a flow in
general.
Recall that for 𝑝 ≥ 1, the boundary operator 𝜕𝑝 ∶ 𝕜𝑐𝑝 → 𝕜𝑐𝑝−1 is
defined by 𝜕𝑝 (𝜎) ∶= ∑_{0≤𝑗≤𝑝} (𝜎 − {𝜎𝑖𝑗 }) = ∑_{0≤𝑗≤𝑝} 𝜎𝑖0 𝜎𝑖1 ⋯ 𝜎̂𝑖𝑗 ⋯ 𝜎𝑖𝑝 , where
the hat over 𝜎̂𝑖𝑗 indicates that the vertex 𝜎𝑖𝑗 is omitted (Definition 3.12).
Definition 8.6. Let 𝑉 be a gradient vector field on 𝐾. Define Φ𝑝 ∶ 𝕜𝑐𝑝 →
𝕜𝑐𝑝 given by Φ𝑝 (𝜎) ∶= 𝜎 + 𝜕𝑝+1 (𝑉𝑝 (𝜎)) + 𝑉𝑝−1 (𝜕𝑝 (𝜎)) to be the gradient
flow or flow of 𝑉. We write Φ(𝜎) = 𝜎 + 𝜕(𝑉(𝜎)) + 𝑉(𝜕(𝜎)) when 𝑝 is
clear from the context.
Remark 8.7. It is important not to confuse the flow of a simplex with a
𝑉-path at that simplex. One important distinction is that while 𝑉-paths
only look at arrows in dimensions 𝑝 and 𝑝 + 1, flows may take into ac-
count arrows in many dimensions.
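Definition 8.6 can be turned into a short program. The sketch below is my own encoding, not the book's: a chain is a set of simplices (sorted vertex tuples), and symmetric difference plays the role of addition over 𝔽2.

```python
def boundary(chain):
    """Mod-2 boundary: for each simplex, toggle each codimension-1 face."""
    out = set()
    for s in chain:
        if len(s) > 1:                     # vertices have zero boundary
            for i in range(len(s)):
                out ^= {s[:i] + s[i + 1:]}  # face omitting the i-th vertex
    return out

def apply_V(V, chain):
    """Extend a gradient vector field V (a dict sigma -> tau) linearly."""
    out = set()
    for s in chain:
        if s in V:
            out ^= {V[s]}
    return out

def flow(V, chain):
    """Phi(c) = c + boundary(V(c)) + V(boundary(c)), all over F2."""
    return chain ^ boundary(apply_V(V, chain)) ^ apply_V(V, boundary(chain))

# Hypothetical example: path graph 1 - 2 - 3 with arrows 1 -> 12 and 3 -> 23.
V = {(1,): (1, 2), (3,): (2, 3)}
assert flow(V, {(1,)}) == {(2,)}   # vertex 1 flows to the critical vertex 2
```

Iterating `flow` until the output stops changing computes the stabilized flow discussed next.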
Let 𝑓 ∶ 𝐴 → 𝐴 be any function and write 𝑓𝑛 ∶= 𝑓 ∘ 𝑓 ∘ ⋯ ∘ 𝑓 for
the composition of 𝑓 with itself 𝑛 times. Then 𝑓 is said to stabilize at
𝑥 ∈ 𝐴 if there is an integer 𝑚 such that 𝑓 ∘ 𝑓𝑚 (𝑥) = 𝑓𝑚 (𝑥). We are
interested in determining whether or not a flow Φ stabilizes, and hence
in computing Φ𝑛 for 𝑛 ≥ 1. Since the values of Φ depend on the chosen
gradient vector field 𝑉, we sometimes say that 𝑉 stabilizes if Φ stabilizes.
Problem 8.8. Apply the definition of Φ iteratively starting with 𝑎4 from
Example 8.3 until it stabilizes.
Example 8.9. It can be a little tricky to intuitively grasp what the flow
is doing in higher-dimensional simplicial complexes, especially when
there are multiple ways in which to flow. Let us look at a more complex
example. Let 𝑉 be the gradient vector field below.
[figure: a gradient vector field on a 2-dimensional simplicial complex with vertices 𝑎1 , … , 𝑎9 ]
We will compute the flow starting at 𝑎2 𝑎5 . Before we do the com-
putation, take a moment to think what you expect the answer to be. To
make the computation easier, note that Φ(𝑎2 𝑎5 ) = 𝑎2 𝑎6 + 𝑎5 𝑎6 , Φ(𝑎2 𝑎6 )
= 𝑎2 𝑎3 +𝑎3 𝑎6 +𝑎6 𝑎8 , Φ(𝑎5 𝑎6 ) = 𝑎5 𝑎8 , Φ(𝑎2 𝑎3 ) = 𝑎2 𝑎3 +𝑎3 𝑎6 , Φ(𝑎3 𝑎6 ) =
𝑎6 𝑎8 , Φ(𝑎6 𝑎8 ) = 0, and Φ(𝑎5 𝑎8 ) = 𝑎5 𝑎8 . We thus compute
Φ(𝑎2 𝑎5 ) = 𝑎2 𝑎6 + 𝑎5 𝑎6 ,
Φ(𝑎2 𝑎6 + 𝑎5 𝑎6 ) = 𝑎2 𝑎3 + 𝑎3 𝑎6 + 𝑎6 𝑎8 + 𝑎5 𝑎8 ,
Φ(𝑎2 𝑎3 + 𝑎3 𝑎6 + 𝑎6 𝑎8 + 𝑎5 𝑎8 ) = 𝑎2 𝑎3 + 𝑎3 𝑎6 + 𝑎6 𝑎8 + 𝑎5 𝑎8 .
After the first flow, we find ourselves on 𝑎2 𝑎6 and 𝑎5 𝑎6 , with two
2-faces to flow into. From there, we find ourselves “stuck” on each of
their boundaries. In other words, Φ2 (𝑎2 𝑎5 ) = Φ𝑛 (𝑎2 𝑎5 ) for every 𝑛 ≥ 2
so that Φ stabilizes at 𝑎2 𝑎5 with Φ𝑛 (𝑎2 𝑎5 ) = 𝑎2 𝑎3 + 𝑎3 𝑎6 + 𝑎6 𝑎8 + 𝑎5 𝑎8
for 𝑛 ≥ 2.
Problem 8.10. Verify the computations in Example 8.9; that is, prove
that
(i) Φ(𝑎2 𝑎5 ) = 𝑎2 𝑎6 + 𝑎5 𝑎6 ;
(ii) Φ(𝑎2 𝑎6 ) = 𝑎2 𝑎3 + 𝑎3 𝑎6 + 𝑎6 𝑎8 ;
(iii) Φ(𝑎5 𝑎6 ) = 𝑎5 𝑎8 ;
(iv) Φ(𝑎2 𝑎3 ) = 𝑎2 𝑎3 + 𝑎3 𝑎6 ;
(v) Φ(𝑎3 𝑎6 ) = 𝑎6 𝑎8 ;
(vi) Φ(𝑎6 𝑎8 ) = 0;
(vii) Φ(𝑎5 𝑎8 ) = 𝑎5 𝑎8 .
Not only can we compute the flow of a single simplex iteratively,
but we can also compute the flow of an entire simplicial complex or sub-
complex by decreeing that if 𝜎, 𝜏 ∈ 𝐾 are simplices, then Φ(𝜎 ∪ 𝜏) ∶=
Φ(𝜎) + Φ(𝜏) where the sum is modulo 2. For example, using the simpli-
cial complex from Example 8.9, we can compute
Φ(⟨𝑎2 𝑎5 ⟩) = Φ(𝑎2 ∪ 𝑎2 𝑎5 ∪ 𝑎5 )
= 0 + 𝑎2 𝑎6 + 𝑎5 𝑎6 + 0
= 𝑎2 𝑎6 + 𝑎5 𝑎6 .
Problem 8.11. Let 𝐾 be a simplicial complex and let Φ(𝐾) ∶=
{∑_{𝜎∈𝐾} Φ(𝜎)}. Viewing each summand element as a simplex, is Φ(𝐾)
necessarily a simplicial complex? Prove it or give a counterexample.
We compared stabilization of a flow to water flowing through pipes
and ultimately emptying at the bottom. But perhaps this comparison is
misleading. Maybe the water loops in a cycle in the pipes or flows within
the pipes forever in an erratic way. Turning to mathematical flow, this
raises the natural question: if 𝑉 is a gradient vector field on 𝐾, does 𝑉
stabilize for every 𝜎 ∈ 𝐾? Since 𝐾 is finite, if 𝑉 does not stabilize, then
it eventually “loops,” becomes periodic, or maybe does something else.
Applying what we know about discrete Morse functions, we would be
surprised if 𝑉 does become periodic.
Exercise 8.12. Give an example to show that a discrete vector field need
not stabilize at every simplex.
Problem 8.13. Let Φ be the flow induced by a gradient vector field 𝑉
on a simplicial complex 𝐾. Prove that Φ stabilizes at every vertex.
Problem 8.13 is a precursor to the more general result that Φ stabi-
lizes at all simplices whenever we have a gradient vector field. Before
we show this, we need a few other results that tell us more about what
the flow looks like. First we present a lemma.
Lemma 8.14. Let 𝐾 be a simplicial complex, 𝜕 the boundary operator,
and Φ a flow on 𝐾. Then Φ𝜕 = 𝜕Φ.
Problem 8.15. Prove Lemma 8.14.
The following technical result gives us a better understanding of the
flow of a 𝑝-simplex in terms of a linear combination of all 𝑝-simplices.
Proposition 8.16. Let 𝜎1 , … , 𝜎𝑟 be the 𝑝-dimensional simplices of 𝐾 and
write Φ(𝜎𝑖 ) = ∑𝑗 𝑎𝑖𝑗 𝜎𝑗 (i.e., as a linear combination of 𝑝-simplices) where
𝑎𝑖𝑗 is either 0 or 1. Then 𝑎𝑖𝑖 = 1 if and only if 𝜎𝑖 is critical. Furthermore,
if 𝑎𝑖𝑗 = 1 for some 𝑗 ≠ 𝑖, then 𝑓(𝜎𝑗 ) < 𝑓(𝜎𝑖 ).
Proof. By Proposition 8.4, the 𝑝-simplex 𝜎𝑖 satisfies exactly one of the
following: 𝜎𝑖 is critical, 𝜎𝑖 ∈ Im(𝑉), or 𝑉(𝜎𝑖 ) ≠ 0. We proceed by consid-
ering the different cases.
Suppose 𝜎𝑖 is critical. We show that 𝜎𝑖 appears in Φ(𝜎𝑖 ) and that
𝑓(𝜎𝑗 ) < 𝑓(𝜎𝑖 ) whenever 𝜎𝑗 appears in Φ(𝜎𝑖 ). Since 𝜎𝑖 is critical, 𝑉(𝜎𝑖 ) = 0
and for any codimension-1 face 𝜏 < 𝜎𝑖 , 𝑓(𝜏) < 𝑓(𝜎𝑖 ). By definition,
𝑉(𝜏) = 0 or 𝑉(𝜏) = 𝜎𝜏̃ with 𝑓(𝜎𝜏̃ ) ≤ 𝑓(𝜏) < 𝑓(𝜎𝑖 ). Using these facts, we
see that
Φ(𝜎𝑖 ) = 𝜎𝑖 + 𝑉(𝜕𝜎𝑖 ) + 0 = 𝜎𝑖 + 𝑉(∑_𝜏 𝜏) = 𝜎𝑖 + ∑_𝜏 𝑉(𝜏) = 𝜎𝑖 + ∑_𝜏 𝜎̃𝜏 ,
where the sums run over the codimension-1 faces 𝜏 < 𝜎𝑖 and all values 𝜎̃𝜏 satisfy 𝑓(𝜎̃𝜏 ) < 𝑓(𝜎𝑖 ).
Now suppose that 𝜎𝑖 ∈ Im(𝑉). Then there exists 𝜂 ∈ 𝐾 such that
𝑉(𝜂) = 𝜎𝑖 . By Proposition 8.4(i), 𝑉 ∘ 𝑉 = 0, yielding
Φ(𝜎𝑖 ) = 𝜎𝑖 + 𝑉(𝜕𝜎𝑖 ) + 𝜕(𝑉(𝑉(𝜂))) = 𝜎𝑖 + ∑_𝜏 𝑉(𝜏),
where, as before, 𝜏 ranges over the codimension-1 faces of 𝜎𝑖 . Next, by Proposi-
tion 8.4(ii), 𝜂 is the unique codimension-1 face of 𝜎𝑖 satisfying 𝑉(𝜂) = 𝜎𝑖 ,
and hence
Φ(𝜎𝑖 ) = 𝜎𝑖 + 𝑉(𝜂) + ∑_{𝜏≠𝜂} 𝑉(𝜏) = 𝜎𝑖 + 𝜎𝑖 + ∑_{𝜏≠𝜂} 𝑉(𝜏) = ∑_{𝜏≠𝜂} 𝑉(𝜏).
Furthermore, any other codimension-1 face 𝜏 < 𝜎𝑖 satisfies 𝑉(𝜏) = 0 or
𝑉(𝜏) = 𝜎̃𝜏 with 𝑓(𝜎̃𝜏 ) ≤ 𝑓(𝜏) < 𝑓(𝜎𝑖 ). Thus
Φ(𝜎𝑖 ) = ∑_{𝜏≠𝜂} 𝜎̃𝜏 .
Finally, suppose that 𝑉(𝜎𝑖 ) = 𝜏 with 𝜏 ≠ 0. For each codimension-1 face
𝜂 < 𝜎𝑖 , either 𝑉(𝜂) = 0 or 𝑉(𝜂) = 𝜎̃𝜂 where 𝑓(𝜎̃𝜂 ) ≤ 𝑓(𝜂) ≤ 𝑓(𝜎𝑖 ), so that
𝑉(𝜕𝜎𝑖 ) = ∑ 𝜎̃𝜂 . Furthermore, 𝜕(𝑉𝜎𝑖 ) = 𝜕(𝜏) = 𝜎𝑖 + ∑ 𝜎̃. Combining
these facts, we obtain
Φ(𝜎𝑖 ) = 𝜎𝑖 + 𝑉(𝜕𝜎𝑖 ) + 𝜕(𝑉𝜎𝑖 ) = 𝜎𝑖 + (∑ 𝜎̃) + 𝜎𝑖 = ∑ 𝜎̃.
Note that the only time in the above three cases at which 𝑎𝑖𝑖 = 1 is
when 𝜎𝑖 is critical. □
8.2. The flow complex
Now we are ready to define the flow complex discussed at the beginning
of this chapter. Let 𝑓 ∶ 𝐾 → ℝ be a discrete Morse function. For 𝐾 an
𝑛-dimensional complex, let 𝕜Φ𝑝 (𝐾) = {𝑐 ∈ 𝕜𝑐𝑝 ∶ Φ(𝑐) = 𝑐}. We write
𝕜Φ𝑝 when 𝐾 is clear from the context. Note that while 𝑐𝑝 indicates the
dimension of the vector space 𝕜𝑐𝑝 , the 𝑝 in 𝕜Φ𝑝 indicates that it is the
vector space induced by linear combinations of certain 𝑝-simplices (and
consequently the dimension of the vector space is not obvious).
Problem 8.17. Show that the boundary operator 𝜕𝑝 ∶ 𝕜𝑐𝑝 → 𝕜𝑐𝑝−1 can
be restricted to 𝜕𝑝 ∶ 𝕜Φ𝑝 → 𝕜Φ𝑝−1 .
Given the result of Problem 8.17, we then obtain the chain complex
0 ⟶ 𝕜Φ𝑛 ⟶𝜕 𝕜Φ𝑛−1 ⟶𝜕 ⋯ ⟶𝜕 𝕜Φ0 ⟶ 0,
which is called the flow complex of 𝐾 and denoted by 𝕜Φ∗ (𝐾) = 𝕜Φ∗ . Let
𝑐 ∈ 𝕜Φ𝑝 (𝐾). Then 𝑐 is an 𝔽2 -linear combination of 𝑝-simplices from 𝐾;
i.e., 𝑐 = ∑_{𝜎∈𝐾𝑝} 𝑎𝜎 𝜎 where 𝑎𝜎 ∈ 𝔽2 . For a fixed 𝑐 ∈ 𝕜Φ𝑝 , call any 𝜎∗ ∈ 𝐾𝑝
with 𝑎𝜎∗ ≠ 0 a maximizer of 𝑐 if 𝑓(𝜎∗ ) ≥ 𝑓(𝜎) for all 𝜎 ∈ 𝐾𝑝 with 𝑎𝜎 ≠ 0.
Lemma 8.18. If 𝜎∗ is a maximizer of some 𝑐 ∈ 𝕜Φ𝑝 (𝐾), then 𝜎∗ is critical.
Proof. Write 𝑐 = ∑_{𝜎∈𝐾𝑝} 𝑎𝜎 𝜎 and apply Φ to both sides, yielding Φ(𝑐) =
∑_{𝜎∈𝐾𝑝} 𝑎𝜎 Φ(𝜎). Since 𝑐 ∈ 𝕜Φ𝑝 (𝐾), we have that 𝑐 = Φ(𝑐) = ∑_{𝜎∈𝐾𝑝} 𝑎𝜎 Φ(𝜎).
Now 𝜎∗ is a maximizer of 𝑐, so 𝑓(𝜎∗ ) ≥ 𝑓(𝜎) whenever 𝑎𝜎 ≠ 0. We thus
apply the contrapositive of the last statement in Proposition 8.16 to Φ(𝜎∗ )
to see that all coefficients are 0 other than 𝑎𝜎∗ , the coefficient in front of
𝜎∗ , which is equal to 1. By the same proposition, 𝜎∗ is critical. □
We are now ready to prove that the gradient flow stabilizes for any
𝜎 ∈ 𝐾.
Theorem 8.19. Let 𝑐 ∈ 𝕜𝑐𝑝 . Then the flow Φ stabilizes at 𝑐; i.e., there is
an integer 𝑁 such that Φ𝑖 (𝑐) = Φ𝑗 (𝑐) for all 𝑖, 𝑗 ≥ 𝑁.
Proof. It suffices to show the result for any 𝜎 ∈ 𝐾, as the general state-
ment follows by linearity. Hence let 𝜎 ∈ 𝐾 and 𝑟𝜎 = 𝑟 ∶= |{𝜎̃ ∈ 𝐾 ∶
𝑓(𝜎̃) < 𝑓(𝜎)}|. We proceed by induction on 𝑟. For 𝑟 = 0, 𝑓(𝜎̃) ≥ 𝑓(𝜎) for
all 𝜎̃ ∈ 𝐾, and hence Φ(𝜎) = 𝜎 or Φ(𝜎) = 0, yielding the base case.
Now suppose that 𝑟 > 0. We consider 𝜎 regular and 𝜎 critical. If 𝜎
is regular, then Φ(𝜎) = ∑_{𝑓(𝜎̃)<𝑓(𝜎)} 𝑎𝜎̃ 𝜎̃ by Proposition 8.16. Now {𝜏 ∶ 𝑓(𝜏) < 𝑓(𝜎̃)} ⊆ {𝜏 ∶ 𝑓(𝜏) < 𝑓(𝜎)}
with 𝜎̃ an element of the latter set but not the former, so 𝑟𝜎̃ < 𝑟𝜎 for all 𝜎̃ with 𝑓(𝜎̃) < 𝑓(𝜎). By the
inductive hypothesis, there exists 𝑁𝜎̃ such that for all 𝑖, 𝑗 ≥ 𝑁𝜎̃ , Φ𝑖 (𝜎̃) =
Φ𝑗 (𝜎̃). Since Φ is linear, Φ𝑁 (𝜎) is fixed for 𝑁 > max_{𝑓(𝜎̃)<𝑓(𝜎)} {𝑁𝜎̃ }.
Next, assume 𝜎 is critical, and write 𝑐 ∶= 𝑉(𝜕𝜎). Then 𝑐 =
∑_{𝑓(𝜎̃)<𝑓(𝜎)} 𝑎𝜎̃ 𝜎̃ by the proof of Proposition 8.16. By Problem 8.20 (see be-
low), we may write Φ𝑚 (𝜎) = 𝜎 + 𝑐 + Φ(𝑐) + ⋯ + Φ𝑚−1 (𝑐). It thus suffices to
show that there is an 𝑁 such that Φ𝑁 (𝑐) = 0. Since 𝑐 = ∑_{𝑓(𝜎̃)<𝑓(𝜎)} 𝑎𝜎̃ 𝜎̃,
the inductive hypothesis applied to each 𝜎̃ implies there exists 𝑁̃ such
that Φ𝑁̃ (𝑐) ∈ 𝕜Φ∗ is stable. Observe that for any 𝜏, Φ(𝑉𝜏) =
𝑉𝜏 + 𝜕𝑉𝑉𝜏 + 𝑉𝜕𝑉𝜏 = 𝑉𝜏 + 𝑉𝜕𝑉𝜏 = 𝑉(𝜏 + 𝜕𝑉𝜏). In other words, Φ sends
elements in Im(𝑉) to Im(𝑉). Since 𝑐 ∈ Im(𝑉), Φ𝑁̃ (𝑐) ∈ Im(𝑉). Write
Φ𝑁̃ (𝑐) = 𝑉(𝑤) where 𝑤 = ∑_𝜏 𝑎𝜏 𝜏. Then Φ𝑁̃ (𝑐) = 𝑉(𝑤) = ∑_𝜏 𝑎𝜏 𝑉𝜏.
It follows by Proposition 8.4(iii) that 𝑎𝜏 = 0 whenever 𝜏 is critical. Fur-
thermore, any maximizer of 𝐴 ∶= {𝑓(𝜏) ∶ 𝑎𝜏 ≠ 0} is critical by Lemma
8.18. But if 𝜏 ∈ 𝐴 is a maximizer, then 𝜏 is critical and 𝑎𝜏 ≠ 0, a contra-
diction. Thus 𝐴 = ∅ so that 𝑎𝜏 = 0 for all 𝜏 ∈ 𝐾𝑝 ; i.e., Φ𝑁̃ (𝑐) = 0. As
mentioned above, this is the desired result. □
Problem 8.20. Let 𝜎 ∈ 𝐾 and write 𝑐 ∶= 𝑉(𝜕𝜎). Prove that if 𝜎 is critical,
then Φ𝑚 (𝜎) = 𝜎 + 𝑐 + Φ(𝑐) + Φ2 (𝑐) + ⋯ + Φ𝑚−1 (𝑐).
8.3. Equality of homology
We have just shown that for any 𝑐 ∈ 𝕜𝑐𝑝 (𝐾), the flow starting at 𝑐 eventually
stabilizes to some element in 𝕜Φ𝑝 . Denote this element by Φ∞ (𝑐),
so that we obtain a function Φ∞ ∶ 𝕜𝑐𝑝 → 𝕜Φ𝑝 . Using this, we relate the
homology of the chain complex 𝕜Φ∗ (𝐾) with simplicial homology defined
in Section 3.2. In order to do this, we need a few more ideas from linear
algebra. Refer back to Section 3.1 if needed. First, recall that a chain
complex
⋯ ⟶𝜕𝑖+1 𝐶𝑖 ⟶𝜕𝑖 𝐶𝑖−1 ⟶𝜕𝑖−1 ⋯ ⟶𝜕2 𝐶1 ⟶𝜕1 𝐶0 ⟶𝜕0 0
is a sequence of vector spaces 𝐶𝑖 where each 𝜕𝑖 is a linear transformation
with the property that 𝜕𝑖−1 ∘𝜕𝑖 = 0 for all 𝑖. The chain complex is denoted
by (𝐶∗ , 𝜕∗ ) for short, and any element 𝑐 ∈ 𝐶𝑖 is called a chain. By your
work in Problem 3.22, this implies that Im(𝜕𝑖+1 ) ⊆ ker(𝜕𝑖 ), and hence
we defined the 𝑖th homology vector space of the chain complex 𝐶∗ by
𝐻𝑖 (𝐶∗ ) ∶= 𝕜null 𝜕𝑖 −rank 𝜕𝑖+1 . To see how 𝑣 ∈ 𝐶𝑖 relates to an element in
𝐻𝑖 (𝐶∗ ), we give a more precise definition of 𝐻𝑖 (𝐶∗ ). Let 𝑧 ∈ ker(𝜕𝑖 ) and
define [𝑧] ∶= {𝑧 + 𝑤 ∶ 𝑤 ∈ Im(𝜕𝑖+1 )}. Since Im(𝜕𝑖+1 ) ⊆ ker(𝜕𝑖 ), such
a definition makes sense. Then we may define 𝐻𝑖 (𝐶∗ ) ∶= {[𝑧] ∶ 𝑧 ∈
ker(𝜕𝑖 )} with a vector space structure [𝑧] + [𝑣] ∶= [𝑧 + 𝑣]. It is easy to
show that this is a vector space of dimension null 𝜕𝑖 −rank 𝜕𝑖+1 and hence
we recover the same definition as before. The advantage here is that
𝐻𝑖 (𝐶∗ ) is now defined in terms of elements of 𝐶𝑖 . Notice that elements
of 𝐻𝑖 are sets. Viewing homology in this way, suppose there is another
chain complex (𝐶∗′ , 𝜕∗′ ) along with linear transformations 𝑓𝑖 ∶ 𝐶𝑖 → 𝐶𝑖′
such that 𝑓𝑖−1 ∘ 𝜕𝑖 = 𝜕𝑖′ ∘ 𝑓𝑖 for all 𝑖:
⋯ ⟶𝜕𝑖+1 𝐶𝑖 ⟶𝜕𝑖 𝐶𝑖−1 ⟶𝜕𝑖−1 ⋯ ⟶𝜕2 𝐶1 ⟶𝜕1 𝐶0 ⟶𝜕0 0
         ↓𝑓𝑖       ↓𝑓𝑖−1              ↓𝑓1      ↓𝑓0
⋯ ⟶𝜕𝑖+1′ 𝐶𝑖′ ⟶𝜕𝑖′ 𝐶𝑖−1′ ⟶𝜕𝑖−1′ ⋯ ⟶𝜕2′ 𝐶1′ ⟶𝜕1′ 𝐶0′ ⟶𝜕0′ 0
In this case, we say that we have a commutative diagram. Such
a sequence of 𝑓𝑖 is called a chain map. The chain map induces a map
(𝑓𝑖 )∗ on the homology vector spaces, (𝑓𝑖 )∗ ∶ 𝐻𝑖 (𝐶∗ ) → 𝐻𝑖 (𝐶∗′ ), by defining
(𝑓𝑖 )∗ ([𝑧]) ∶= [𝑓𝑖 (𝑧)]. We sometimes write 𝑓 instead of 𝑓𝑖 .
Exercise 8.21. Show that [𝑧] = [𝑧′ ] if and only if 𝑧 − 𝑧′ = 𝑤 for some
𝑤 ∈ Im(𝜕).
Proposition 8.22. The function (𝑓)∗ is a well-defined linear transfor-
mation. That is, if [𝑧] = [𝑧′ ], then [𝑓(𝑧)] = [𝑓(𝑧′ )].
Proof. Let [𝑧] = [𝑧′ ]. By Exercise 8.21, 𝑧 − 𝑧′ = 𝜕(𝑣) for some 𝑣. We
have
𝑧 − 𝑧′ = 𝜕(𝑣),
𝑓(𝑧 − 𝑧′ ) = 𝑓 ∘ 𝜕(𝑣),
𝑓(𝑧) − 𝑓(𝑧′ ) = 𝜕 ∘ 𝑓(𝑣),
where the last line is justified since 𝑓 is a chain map. Hence [𝑓(𝑧)] = [𝑓(𝑧′ )]
by Exercise 8.21.
Now we show that 𝑓∗ is linear. Let [𝑐], [𝑐′ ] ∈ 𝐻∗ (𝐶∗ ). Then 𝑓∗ ([𝑐] +
[𝑐′ ]) = 𝑓∗ ([𝑐 + 𝑐′ ]) = [𝑓(𝑐 + 𝑐′ )] = [𝑓(𝑐) + 𝑓(𝑐′ )] = [𝑓(𝑐)] + [𝑓(𝑐′ )] =
𝑓∗ ([𝑐]) + 𝑓∗ ([𝑐′ ]) so that 𝑓∗ is linear. □
For any set 𝐴, let id𝐴 ∶ 𝐴 → 𝐴 be the function defined by id𝐴 (𝑎) = 𝑎
for every 𝑎 ∈ 𝐴. Such a function is called the identity map on 𝐴.
Problem 8.23. Let 𝑓𝑖 ∶ 𝑉𝑖 → 𝑊𝑖 and 𝑔𝑖 ∶ 𝑊𝑖 → 𝑍𝑖 be two sets of chain
maps. Prove that
(i) (𝑔𝑖 ∘ 𝑓𝑖 )∗ = (𝑔𝑖 )∗ ∘ (𝑓𝑖 )∗ ;
(ii) if id𝑉𝑖 ∶ 𝑉𝑖 → 𝑉𝑖 is the identity, then (id𝑉𝑖 )∗ = id𝐻∗ (𝑉𝑖 ) .
Now let 𝐾 be a simplicial complex with (𝕜∗ , 𝜕∗ ) the chain complex
defined in Section 3.2 and (𝕜Φ∗ , 𝜕∗ ) the flow complex. Then for each 𝑝,
we have the inclusion map 𝑖𝑝 ∶ 𝕜Φ𝑝 → 𝕜𝑐𝑝 defined by 𝑖𝑝 (𝑐) = 𝑐. This is a
chain map as the following diagram commutes:
⋯ ⟶𝜕𝑝+1 𝕜Φ𝑝 ⟶𝜕𝑝 𝕜Φ𝑝−1 ⟶𝜕𝑝−1 ⋯ ⟶𝜕2 𝕜Φ1 ⟶𝜕1 𝕜Φ0 ⟶𝜕0 0
          ↓𝑖𝑝        ↓𝑖𝑝−1              ↓𝑖1       ↓𝑖0
⋯ ⟶𝜕𝑝+1 𝕜𝑐𝑝 ⟶𝜕𝑝 𝕜𝑐𝑝−1 ⟶𝜕𝑝−1 ⋯ ⟶𝜕2 𝕜𝑐1 ⟶𝜕1 𝕜𝑐0 ⟶𝜕0 0
A linear transformation 𝑓 ∶ 𝑉 → 𝑊 is a vector space isomor-
phism if there is a linear transformation 𝑔 ∶ 𝑊 → 𝑉 such that 𝑔∘𝑓 = id𝑉
and 𝑓 ∘ 𝑔 = id𝑊 . In this case, we say that 𝑉 and 𝑊 are isomorphic, de-
noted by 𝑉 ≅ 𝑊. Our main result in this section is that the homology
obtained from the chain complex (𝕜∗ , 𝜕∗ ) and the homology obtained
from the flow complex are isomorphic.
Theorem 8.24. For all 𝑝 ≥ 0, we have 𝐻𝑝 (𝕜Φ∗ ) ≅ 𝐻𝑝 (𝐾).
Proof. In order to show that 𝐻𝑝 (𝕜Φ∗ ) ≅ 𝐻𝑝 (𝐾), we will show that Φ∞∗ is
an isomorphism. Since Φ∞ ∘ 𝑖𝑝 = id_{𝕜Φ𝑝} , we have Φ∞∗ ∘ 𝑖∗ = id_{𝐻𝑝 (𝕜Φ∗ )} by
Problem 8.23. It thus remains to show that 𝑖∗ ∘ Φ∞∗ = id_{𝐻𝑝 (𝐾)} . In order to
do this, we construct a function 𝐷 ∶ 𝕜𝑐𝑝 → 𝕜𝑐𝑝+1 with the property that
id_{𝕜𝑐𝑝} − 𝑖𝑝 ∘ Φ∞ = 𝜕 ∘ 𝐷 + 𝐷 ∘ 𝜕. If we can construct such a function, then
the result will follow since id_{𝐻𝑝 (𝐾)} − 𝑖∗ ∘ Φ∞∗ = 𝜕∗ ∘ 𝐷∗ + 𝐷∗ ∘ 𝜕∗ = 0.
Now there exists 𝑁 > 0 such that Φ∞ = Φ𝑁 by Theorem 8.19. Using the
algebraic fact that (1 − 𝑎𝑁 ) = (1 − 𝑎)(1 + 𝑎 + 𝑎2 + ⋯ + 𝑎𝑁−1 ), we have
id_{𝕜𝑐𝑝} − 𝑖𝑝 ∘ Φ∞ = id_{𝕜𝑐𝑝} − Φ𝑁
= (id_{𝕜𝑐𝑝} − Φ)(id_{𝕜𝑐𝑝} + Φ + Φ2 + ⋯ + Φ𝑁−1 )
= (−𝜕 ∘ 𝑉 − 𝑉 ∘ 𝜕)(id_{𝕜𝑐𝑝} + Φ + Φ2 + ⋯ + Φ𝑁−1 )
= 𝜕[−𝑉(id_{𝕜𝑐𝑝} + Φ + ⋯ + Φ𝑁−1 )]
+ [−𝑉(id_{𝕜𝑐𝑝} + Φ + ⋯ + Φ𝑁−1 )]𝜕.
Hence, define 𝐷 ∶= −𝑉(id_{𝕜𝑐𝑝} + Φ + Φ2 + ⋯ + Φ𝑁−1 ). This is the
desired result. □
8.4. Explicit formula for homology
Example 8.25. The last section was mathematically intense, so let’s get
back to a concrete example. Let us investigate the function Φ∞ ∶ 𝕜𝑐𝑝 →
𝕜Φ𝑝 . Consider the discrete Morse function with gradient vector field 𝑉
given with arrows below:
[figure: a gradient vector field on a simplicial complex with vertices 𝑎1 , … , 𝑎6 ]
Although tedious, it is not difficult to compute that
Φ∞ (𝑎1 ) = Φ∞ (𝑎2 ) = Φ∞ (𝑎3 ) = Φ∞ (𝑎4 ) = 𝑎4 ,
Φ∞ (𝑎5 ) = Φ∞ (𝑎6 ) = 𝑎5 ,
Φ∞ (𝑎1 𝑎2 ) = 𝑎1 𝑎2 + 𝑎1 𝑎4 + 𝑎2 𝑎3 + 𝑎3 𝑎4 ,
Φ∞ (𝑎3 𝑎5 ) = 𝑎3 𝑎5 + 𝑎3 𝑎4 ,
Φ∞ (𝑎4 𝑎5 ) = 𝑎4 𝑎5 ,
Φ∞ (𝑎3 𝑎6 ) = 𝑎3 𝑎6 + 𝑎6 𝑎5 + 𝑎3 𝑎4 ,
Φ∞ (𝑎1 𝑎4 ) = Φ∞ (𝑎2 𝑎3 ) = Φ∞ (𝑎3 𝑎4 ) = Φ∞ (𝑎6 𝑎5 ) = 0.
Hence 𝕜Φ0 is the vector space generated by {𝑎4 , 𝑎5 } while 𝕜Φ1 is the
vector space generated by {𝑎1 𝑎2 + 𝑎1 𝑎4 + 𝑎2 𝑎3 + 𝑎3 𝑎4 , 𝑎3 𝑎5 + 𝑎3 𝑎4 , 𝑎4 𝑎5 ,
𝑎3 𝑎6 + 𝑎6 𝑎5 + 𝑎3 𝑎4 }. It is not difficult to see that these elements are
linearly independent. More interestingly, consider that if we restrict the
simplices in 𝕜𝑐0 and 𝕜𝑐1 to the critical simplices in these vector spaces,
we obtain a 1–1 correspondence with the non-zero elements in 𝕜Φ0 and
𝕜Φ1 , respectively. That is, we have the correspondence
𝑎4 ↔ 𝑎4 ,
𝑎5 ↔ 𝑎5 ,
𝑎1 𝑎2 ↔ 𝑎1 𝑎2 + 𝑎1 𝑎4 + 𝑎2 𝑎3 + 𝑎3 𝑎4 ,
𝑎3 𝑎5 ↔ 𝑎3 𝑎5 + 𝑎3 𝑎4 ,
𝑎4 𝑎5 ↔ 𝑎4 𝑎5 ,
𝑎3 𝑎6 ↔ 𝑎3 𝑎6 + 𝑎6 𝑎5 + 𝑎3 𝑎4 .
Exercise 8.26. Verify the Φ∞ computations in Example 8.25.
In other words, if we restrict the function Φ∞ ∶ 𝕜𝑐𝑝 → 𝕜Φ 𝑝 to just
critical simplices, we obtain a bijection and furthermore a vector space
isomorphism (at least in the above example). This suggests that it might
be worth studying the following:
Definition 8.27. Let 𝐾 be a simplicial complex with gradient vector field
𝑉. Let ℳ𝑝 denote the vector space generated by the critical 𝑝-simplices
of 𝑉. Then ℳ𝑝 is a vector subspace of 𝕜𝑐𝑝 called the critical complex
of 𝐾 with respect to 𝑉.
The function Φ∞ ∶ 𝕜𝑐𝑝 → 𝕜Φ𝑝 restricts to the function Φ∞ ∶ ℳ𝑝 →
𝕜Φ𝑝 , which we refer to by the same name with an abuse of notation. If
dim(ℳ𝑝 ) = 𝑖, we write ℳ𝑝𝑖 . As hinted above, we now have the following:
Theorem 8.28. The function Φ∞ ∶ ℳ𝑝 → 𝕜Φ𝑝 is a vector space isomor-
phism.
Proof. We show that Φ∞ is both surjective and injective. To see that it
is surjective, let 𝑐 ∈ 𝕜Φ𝑝 , and write 𝑐 = ∑_{𝜎∈𝐾𝑝} 𝑎𝜎 𝜎 where 𝑎𝜎 ∈ 𝔽2 is
the coefficient of 𝜎 in the expansion of 𝑐. Consider the new element 𝑐̃
given by restricting 𝜎 to only critical simplices in the expansion of 𝑐; i.e.,
𝑐̃ ∶= ∑_{𝜎 is critical} 𝑎𝜎 𝜎. We claim that Φ∞ (𝑐̃) = 𝑐. We have
Φ∞ (𝑐̃) = ∑_{𝜎 is critical} 𝑎𝜎 Φ∞ (𝜎) = ∑_{𝜎 is critical} 𝑎𝜎 (𝜎 + 𝑉𝑎𝜎 ) = ∑_{𝜎 is critical} 𝑎𝜎 𝜎 = 𝑐̃,
where 𝑉𝑎𝜎 is an element in the image of 𝑉. Since 𝜎 is critical and 𝑉𝑎𝜎 is
in the image of 𝑉, the expansion of 𝜎 as a combination of basis elements
will have none of the same basis elements as the expansion of 𝑉𝑎𝜎 as a
combination of basis elements. It follows that 𝑎𝜎 𝑉𝑎𝜎 = 0. Applying this
fact, one shows that Φ∞ (𝑐̃) − 𝑐 ∈ 𝕜Φ (Problem 8.29). By Lemma 8.18,
Φ∞ (𝑐̃) − 𝑐 = 0 so that Φ∞ is surjective.
To see that Φ∞ is injective, suppose that Φ∞ (𝑐) = 0 for 𝑐 ∈ ℳ𝑝 .
Then Φ∞ (𝑐) = ∑_{𝜎∈𝐾𝑝} 𝑎𝜎 Φ∞ (𝜎) = 0, and using the same observation
made in the proof of surjectivity, the coefficients of any critical simplex
in the expansion of 𝑐 are all 0, i.e., 𝑐 = 0. Thus Φ∞ is injective. □
Problem 8.29. Show that Φ∞ (𝑐)̃ − 𝑐 ∈ 𝕜Φ , where 𝑐 and 𝑐 ̃ are as defined
in the proof of Theorem 8.28.
Theorem 8.28 tells us that the vector spaces in the flow complex are
the same as the vector spaces generated by all the critical simplices. So,
for example, the simplicial complex 𝐾 in Example 8.25 would have the
following chain complex induced by its critical simplices:
ℳ14 ⟶ ℳ02 ⟶ 0
where ℳ14 is the vector space generated by the four critical 1-simplices
𝑎1 𝑎2 , 𝑎3 𝑎5 , 𝑎4 𝑎5 , and 𝑎3 𝑎6 , while ℳ02 is generated by the two critical 0-
simplices 𝑎4 and 𝑎5 . Now we know from Theorem 8.24 that the ho-
mology of 𝐾 can be computed using the flow complex. Since the flow
complex is the same as the critical complex with critical simplices as the
basis, this raises the question: what are the boundary operators 𝜕𝑝 for
ℳ𝑝 ?
Example 8.30. Refer to the simplicial complex and gradient vector field
in Example 8.25. We seek to define a linear transformation from 𝕜4 to
𝕜2 ; that is, to each pair consisting of a critical 1-simplex and a critical
0-simplex we need to associate either a 0 or a 1. How can we do this?
Take, for example, 𝑎3 𝑎6 and 𝑎5 . We want to use the gradient vector field,
so starting at 𝑎3 𝑎6 , why not count the number of 𝑉-paths (mod 2) to 𝑎5 ?
In this case, there is just one. Let’s try it for another pair, 𝑎1 𝑎2 and 𝑎4 .
Here there are two such 𝑉-paths, starting at 𝑎1 𝑎2 and ending at 𝑎4 . Note
that we are using the definition of a 𝑉-path which allows us to start at
a critical simplex. We seem to want, then, to start at a maximal face of
𝑎1 𝑎2 and count the number of paths to 𝑎4 . If we continue in this way,
we obtain the linear transformation 𝜕 ∶ ℳ14 → ℳ02 given by
𝑎4 𝑎5
𝑎1 𝑎2 2 = 0 2=0
⎛ ⎞
𝑎 𝑎 1 1
𝜕= 4 5 ⎜ ⎟.
𝑎3 𝑎5 ⎜ 1 1 ⎟
𝑎3 𝑎6 ⎝ 1 1 ⎠
Sure enough, this has a rank of 1 and a nullity of 3 so that 𝑏1 (𝐾) = 3 and
𝑏0 (𝐾) = 2 − 1 = 1, precisely what we would expect (see Definition 3.16
for a reminder on how to compute homology).
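Rank and nullity over 𝔽2 can be computed mechanically by Gaussian elimination mod 2. A sketch (the bitmask encoding of rows is my own), applied to the boundary matrix above, whose rows 𝑎1 𝑎2 , 𝑎4 𝑎5 , 𝑎3 𝑎5 , 𝑎3 𝑎6 are (0, 0), (1, 1), (1, 1), (1, 1):

```python
def gf2_rank(rows):
    """Rank over F2 of a matrix whose rows are given as integer bitmasks."""
    rows = list(rows)
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot                    # lowest set bit of the pivot row
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

# Rows of the boundary matrix from Example 8.30 (columns a4, a5).
d1 = [0b00, 0b11, 0b11, 0b11]
rank_d1 = gf2_rank(d1)       # rank 1
b1 = 4 - rank_d1             # null(d1); there are no 2-simplices, so rank(d2) = 0
b0 = 2 - rank_d1
```

This recovers 𝑏1 = 3 and 𝑏0 = 1, matching the computation in the example.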
We are thus led to the following theorem, which tells us that we
can compute the boundary operators of 𝜕𝑝 ∶ ℳ𝑝 → ℳ𝑝−1 by counting
𝑉-paths. Because it is a bit technical, we omit the proof here. The inter-
ested reader may find a proof in [65, Section 8] or [99, Section 7.4].
Theorem 8.31. Let 𝐾 be a simplicial complex and 𝑉 a gradient vector
field on 𝐾. For each 𝜎 ∈ ℳ𝑝 ,
𝜕(𝜎) = ∑_{critical 𝛽(𝑝−1)} 𝛿𝜎,𝛽 𝛽,
where 𝛿𝜎,𝛽 = 0 if the number of 𝑉-paths starting from a maximal face
of 𝜎 to 𝛽 is even, and 𝛿𝜎,𝛽 = 1 if the number of 𝑉-paths is odd. Then
𝐻𝑝 (ℳ∗ ) ≅ 𝐻𝑝 (𝐾).
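As a toy illustration, the entries 𝛿𝜎,𝛽 can be computed recursively. The sketch below is my own encoding; its termination conventions follow my reading of the 𝑉-path definition from Chapter 2, and the example is a hypothetical optimal gradient vector field on the boundary of a triangle (a circle), not one from the text.

```python
def count_v_paths(V, alpha, beta):
    """Count V-paths from the face alpha to the critical simplex beta."""
    if alpha == beta:
        return 1
    if alpha not in V:              # critical or unmatched face: dead end
        return 0
    tau = V[alpha]                  # follow the arrow alpha -> tau
    total = 0
    for i in range(len(tau)):       # step down to a different face of tau
        face = tau[:i] + tau[i + 1:]
        if face != alpha:
            total += count_v_paths(V, face, beta)
    return total

def delta(V, sigma, beta):
    """delta_{sigma,beta} of Theorem 8.31: parity of the number of
    V-paths from the maximal faces of sigma to beta."""
    paths = 0
    for i in range(len(sigma)):
        paths += count_v_paths(V, sigma[:i] + sigma[i + 1:], beta)
    return paths % 2

# Circle: vertices 1, 2, 3; arrows 2 -> 12 and 3 -> 13.
# Critical cells: vertex (1,) and edge (2, 3).
V = {(2,): (1, 2), (3,): (1, 3)}
assert delta(V, (2, 3), (1,)) == 0   # two V-paths cancel mod 2, so b1 = 1
```

The vanishing boundary map gives 𝑏1 = 1 and 𝑏0 = 1, as expected for a circle.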
With an abuse of language, we will call the pair (ℳ∗ , 𝜕∗ ) the criti-
cal complex of 𝐾 with respect to the gradient vector field 𝑉.²
those of you keeping track, we have the “standard” chain complex in-
troduced in Section 3.2, the flow complex from Section 8.2, and now the
critical complex. All three yield the same homology for a fixed simplicial
complex 𝐾. Of course, one of the main advantages of using the critical
complex is that the matrices and vector spaces are much smaller! For
example, let us compute the homology of the simplicial complex in Ex-
ample 3.10. In order to do this, we need a discrete Morse function on the
simplicial complex, and of course the fewer critical simplices the better.
An optimal one is given below.
[figure: an optimal gradient vector field on the simplicial complex of Example 3.10, with vertices 𝑣1 , … , 𝑣7 ]
The set of critical 1- and 0-simplices are {𝑣2 𝑣5 , 𝑣3 𝑣6 , 𝑣3 𝑣7 } and {𝑣1 },
respectively (note that there are no critical 2-simplices). We thus have
ℳ13 ⟶ ℳ01 ⟶ 0
as our critical chain complex. Counting the number of 𝑉-paths from a
maximal face of a critical 1-simplex to a critical 0-simplex yields
          𝑣1
𝑣2 𝑣5   2 = 0
𝑣3 𝑣6   2 = 0
𝑣3 𝑣7   2 = 0
²Again, it is known as the Morse complex in certain parts of the literature, so be careful not to confuse it with our object of study from Chapter 7.
Clearly null(𝜕) = 3 so that 𝑏1 = 3 and 𝑏0 = 1. Now compare the
computation in this example with what we did in Example 3.10. Wasn’t
this much easier?
Problem 8.32. Use Theorem 8.31 to prove the strong discrete Morse
inequalities (Theorem 4.4).
Problem 8.33. Show that if one puts an empty gradient vector field on 𝐾
(i.e., everything is critical), then the formula for the boundary operator
in Theorem 8.31 coincides with the definition of the boundary operator
in Chapter 3.
8.5. Computation of Betti numbers
In this section, we will finally be able to distinguish between many of
the simplicial complexes posed to us in Section 1.1.1.
Example 8.34. Let us compute the Betti numbers of the Klein bottle 𝒦.
We put a gradient vector field on 𝒦, attempting to minimize the number
of critical simplices. The gradient vector field we will use is below.
[figure: a gradient vector field on a triangulation of the Klein bottle, drawn on an identification square with vertices 𝑣0 , … , 𝑣8 ]
The critical simplices are 𝑣0 , 𝑣0 𝑣2 , 𝑣0 𝑣3 , and 𝑣5 𝑣6 𝑣7 so that 𝑚0 = 1, 𝑚1 =
2, and 𝑚2 = 1. This yields the critical chain complex
ℳ21 ⟶𝜕2 ℳ12 ⟶𝜕1 ℳ01 ⟶ 0.
Applying Theorem 8.31, we see that 𝜕2 is given by
            𝑣0 𝑣2   𝑣0 𝑣3
𝑣5 𝑣6 𝑣7      0       0
and 𝜕1 by
          𝑣0
𝑣0 𝑣2   2 = 0
𝑣0 𝑣3   2 = 0.
Hence the 𝔽2 -Betti numbers of 𝒦 are 𝑏2 = 1, 𝑏1 = 2, and 𝑏0 = 1.
We thus conclude that neither 𝑆1 nor the Möbius band 𝑀 has the same
simple homotopy type as 𝒦, even though they have the same Euler char-
acteristic.
Problem 8.35. Use the discrete Morse function you constructed in Prob-
lem 2.41 (or a better one if you can find one) to show that the 𝔽2 -Betti
numbers of the torus 𝑇 2 are 𝑏2 = 1, 𝑏1 = 2, and 𝑏0 = 1.
Remark 8.36. In Problem 8.35, you showed that the torus 𝑇 2 has the
exact same 𝔽2 -Betti numbers as the Klein bottle from Example 8.34. Un-
fortunately, determining whether or not these two complexes have the
same simple homotopy type is beyond the scope of this book. It turns
out that 𝑇 2 ≁ 𝒦. We could have distinguished them using homology
with coefficients in ℤ, but we have chosen to trade precision for com-
putability. One can distinguish between these two complexes by using
the techniques in [134, Section 1.5], for example.
Example 8.37. We compute the 𝔽2 -Betti numbers of the projective plane
𝑃 2 from Example 1.21. Consider the following gradient vector field on
𝑃2 :
[figure: a gradient vector field on a triangulation of 𝑃2 , drawn on an identification disk with vertices 𝑣1 , … , 𝑣6 ]
We obtain the critical chain complex
ℳ21 ⟶𝜕2 ℳ11 ⟶𝜕1 ℳ01 ⟶ 0.
By Theorem 8.31, we count the number of paths from the boundary
of the critical 2-simplex 𝑣1 𝑣3 𝑣5 to the critical 1-simplex 𝑣1 𝑣4 , yielding the
boundary operator
𝜕2 = ( 2 = 0 )   (row 𝑣1 𝑣3 𝑣5 , column 𝑣1 𝑣4 ).
Similarly, we have
𝜕1 = ( 2 = 0 )   (row 𝑣1 𝑣4 , column 𝑣1 ).
Hence, the 𝔽2 -Betti numbers of 𝑃 2 are 𝑏0 = 𝑏1 = 𝑏2 = 1 and 𝑏𝑖 = 0 for
all 𝑖 > 2. In particular, even though 𝜒(𝑃 2 ) = 1 = 𝜒(Δ𝑛 ), we have 𝑃 2 ≁ Δ𝑛 .
Problem 8.38. Compute the Betti numbers of the simplicial complex
approximating the cell phone towers in Section 0.1.1.
Problem 8.39. Compute the Betti numbers of Björner’s complex in Ex-
ample 1.25.
Chapter 9
Computations with discrete Morse theory
The purpose of this chapter is to present and briefly explain algorithms
in pseudo-code which may be implemented to perform computations
using discrete Morse theory. We assume the reader is familiar with some
of the basics of coding such as data structures.
9.1. Discrete Morse functions from point data
When we were first learning about discrete Morse functions, we saw in
Exercise 2.22 that if we have a positive function on the vertex set 𝑉 of a
simplicial complex 𝐾, then we could create a discrete Morse function on
𝐾 by declaring that the value on any simplex is the sum of the values of its
vertices. While this does yield a discrete Morse function, every simplex is
critical, which is not at all helpful. In this section we give an algorithm,
due to King et al. [97], that uses the values on the vertices to construct
a gradient vector field on 𝐾 with “few” critical simplices. More details
about this algorithm may be found in the original paper cited above or
[99, Section 8.3]. Before presenting the algorithm, we fix some notation
and terminology.
Let 𝐾 be a simplicial complex, 𝑉 the vertex set of 𝐾, and 𝑓0 ∶ 𝑉 → ℝ
an injective function. For any 𝜎 ∈ 𝐾, write 𝜎 = 𝑣0 𝑣1 ⋯ 𝑣𝑖 . Define the
lower star filtration on 𝐾 induced by 𝑓0 by
maxf0 (𝜎) ∶= max0≤𝑗≤𝑖 {𝑓0 (𝑣𝑗 )}.
In Definition 6.33, we defined the link of a vertex 𝑣 ∈ 𝐾. The star of
𝑣 in 𝐾, denoted by star𝐾 (𝑣), is the simplicial complex induced by the set
of all simplices of 𝐾 containing 𝑣. The link of 𝑣 in 𝐾 is the set link𝐾 (𝑣) ∶=
star𝐾 (𝑣) − {𝜎 ∈ 𝐾 ∶ 𝑣 ∈ 𝜎}. We now define the lower link of 𝑣 to be the
maximal subcomplex of link𝐾 (𝑣) whose vertices have 𝑓0 -value less than
𝑓0 (𝑣). Note that the lower link depends on the chosen 𝑓0 function while
the link does not.
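In code, the star, link, and lower link are direct set computations. Here is a sketch, with a simplicial complex stored as a collection of frozensets of vertices (the representation and names are ours, not the book's):

```python
def link(K, v):
    # link of v: simplices sigma with v not in sigma and sigma ∪ {v} in K
    Kset = set(map(frozenset, K))
    return {s for s in Kset if v not in s and (s | {v}) in Kset}

def lower_link(K, f0, v):
    # keep only the simplices of the link all of whose vertices have
    # f0-value strictly less than f0(v)
    return {s for s in link(K, v) if all(f0[u] < f0[v] for u in s)}
```

Note that, consistent with Exercise 9.3, the vertex of minimum 𝑓0 -value always has empty lower link under this computation.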
Example 9.1. We illustrate the above definitions. Let 𝐾 be the simplicial
complex below and 𝑓0 the function on the vertices, with values on the
other simplices induced by the lower star filtration.
[Figure: a two-dimensional simplicial complex whose vertices are labeled by their 𝑓0 -values, with every higher-dimensional simplex labeled by the maximum of the values of its vertices. The vertices 𝑢, 𝑣, and 𝑤 carry the 𝑓0 -values 5, 4, and 0, respectively.]
Write 𝑢 ∶= 𝑓0−1 (5), 𝑣 ∶= 𝑓0−1 (4), and 𝑤 ∶= 𝑓0−1 (0) where 𝑢, 𝑣, and 𝑤
are the unique vertices with those values. Then maxf0 (𝑢𝑣𝑤) = 5 while
maxf0 (𝑣𝑤) = 4. The link of 𝑣 is given by
[Figure: the link of 𝑣, a one-dimensional complex on the vertices with 𝑓0 -values 2, 6, 5, and 0.]
The lower link of 𝑣 is the maximal subcomplex of the above complex
with values less than 𝑓0 (𝑣) = 4, which is then seen to be
[Figure: the single vertex with 𝑓0 -value 0.]
Exercise 9.2. Let 𝐾 be the following simplicial complex and 𝑓0 ∶ 𝑉 → ℝ
the function on the vertices:
[Figure: a simplicial complex whose vertices are labeled by their 𝑓0 -values, among them 1, 2, 3, 5, and 6.]
Compute the lower link of 𝑓0−1 (6) and of 𝑓0−1 (1).
Exercise 9.3. Let 𝑓0 ∶ 𝑉 → ℝ be a function. Suppose that 𝑓0 (𝑢) = 𝑎
where 𝑎 ∶= min𝑣∈𝑉 {𝑓0 (𝑣)}. Prove that the lower link of 𝑢 is empty.
We constructed the join of two simplicial complexes in Definition
1.52. Now we define the join of two simplices. Let 𝜎(𝑖) and 𝜏(𝑗) be two
disjoint simplices in 𝐾. The join of 𝜎 and 𝜏, denoted by 𝜎 ∗ 𝜏, is either
undefined or the (𝑖 +𝑗 +1)-simplex whose vertices are the union of those
of 𝜎 and those of 𝜏; that is, 𝜎 ∗ 𝜏 = 𝜎 ∪ 𝜏 ∈ 𝐾. The join is undefined when
𝜎 ∪ 𝜏 ∉ 𝐾. In Exercise 9.2, 𝑓0−1 (2) ∗ 𝑓0−1 (3) would be undefined.
Problem 9.4. Prove that the lower link of 𝑣 is also given by the set of all
simplices 𝜏 ∈ 𝐾 such that 𝑣 ∗ 𝜏 is defined and maxf0 (𝜏) < 𝑓0 (𝑣).
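The join of two simplices is likewise a one-line set operation; a sketch in the same representation as before (names ours):

```python
def join(sigma, tau, K):
    # sigma * tau is the union of the two vertex sets, defined only
    # when that union is itself a simplex of K
    u = frozenset(sigma) | frozenset(tau)
    return u if u in set(map(frozenset, K)) else None
```

Returning `None` when the union is not a simplex of 𝐾 mirrors the join being undefined.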
We now state our main algorithm, which produces a gradient vector
field from a set of values on the vertices. Note that it calls two other al-
gorithms, ExtractRaw and ExtractCancel, which we will define later
in this section.
Algorithm 2 (King et al.) Extract
Input: Simplicial complex 𝐾, injective 𝑓0 ∶ 𝑉(𝐾) → ℝ, 𝑝 ≥ 0
Output: Gradient vector field (𝐴, 𝐵, 𝐶, 𝑟 ∶ 𝐵 → 𝐴) on 𝐾
1 ExtractRaw(𝐾, 𝑓0 ) (Algorithm 3)
2 for 𝑗 = 1, … , dim(𝐾) do
3 ExtractCancel(𝐾, 𝑓0 , 𝑝, 𝑗) (Algorithm 5)
4 end for
Algorithm 2 takes in a simplicial complex 𝐾, an injective function
𝑓0 , and a parameter 𝑝 called the persistence. The output is a gradient
vector field on 𝐾. To see how to view a gradient vector field as (𝐴, 𝐵, 𝐶, 𝑟 ∶
𝐵 → 𝐴), recall from Section 2.2.2 that a gradient vector field on 𝐾 yields
a discrete Morse matching on the directed Hasse diagram of 𝐾. Those
simplices which are unmatched form a set 𝐶. They are precisely the crit-
ical simplices. For any matched pair (𝜎, 𝜏), we may break the pair up by
placing the tail 𝜎 in a set 𝐴 and the head 𝜏 in a set 𝐵. This induces a bijec-
tion 𝑟 ∶ 𝐵 → 𝐴. It follows immediately from Lemma 2.24 that {𝐴, 𝐵, 𝐶}
forms a partition of the simplices of 𝐾.
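Concretely, if a gradient vector field is stored as a dictionary of matched pairs, the data (𝐴, 𝐵, 𝐶, 𝑟) can be read off directly. A sketch (the dictionary representation is our choice, not the book's):

```python
def partition(simplices, matching):
    # matching: tail sigma -> head tau, with dim(tau) = dim(sigma) + 1
    A = set(matching)                                    # tails
    B = set(matching.values())                           # heads
    C = set(simplices) - A - B                           # critical simplices
    r = {tau: sigma for sigma, tau in matching.items()}  # bijection B -> A
    return A, B, C, r
```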
Exercise 9.5. Write down the sets 𝐴, 𝐵, and 𝐶 and the bijection 𝑟 ∶ 𝐵 →
𝐴 of the directed Hasse diagram in Example 7.1.
We will use Example 9.1 as a running example to illustrate Algo-
rithm 2. We will use the same 𝑓0 function on the vertices as in Example
9.1 and label the vertices according to their value under 𝑓0 , i.e.,
[Figure: the simplicial complex of Example 9.1 with its vertices relabeled 𝑣0 , 𝑣1 , … , 𝑣6 according to their 𝑓0 -values.]
This will be our input simplicial complex. Before we can illustrate the al-
gorithm, we need some more preliminaries. Clearly the bulk of the work
in Algorithm 2 is happening in two other algorithms (one of which calls
a third, but simple, algorithm). In order to understand these algorithms,
we define a subgraph 𝑅𝑖 , 𝑖 = 1, … , dim(𝐾), of the directed Hasse diagram
induced by a gradient vector field. For each 𝑖, let 𝐴𝑖 ∶= 𝐴 ∩ 𝐾𝑖 , 𝐵𝑖 ∶=
𝐵∩𝐾𝑖 , and 𝐶𝑖 ∶= 𝐶∩𝐾𝑖 where 𝐾𝑖 ∶= {𝜎 ∈ 𝐾 ∶ dim(𝜎) = 𝑖}. Vertices of 𝑅𝑖
are of two types: either (𝑖 − 1)-simplices in 𝐴𝑖−1 ∪ 𝐶𝑖−1 or the 𝑖-simplices
in 𝐵𝑖 ∪ 𝐶𝑖 . An edge is directed from 𝜎 to all of its codimension-1 faces,
unless 𝜎 ∈ 𝐵𝑖 , in which case the direction is from 𝑟(𝜎) to 𝜎.
Example 9.6. Let 𝐾 be the simplicial complex from Example 9.1 with
gradient vector field given below:
[Figure: the simplicial complex on 𝑣0 , … , 𝑣6 with a gradient vector field indicated by arrows.]
To save space, we write a simplex in the Hasse diagram ℋ𝐾 by concate-
nating its subscripts, e.g. 𝑖𝑗 stands for the simplex 𝑣𝑖 𝑣𝑗 . We have that ℋ𝑉
is given by
[Figure: the directed Hasse diagram ℋ𝑉 , with top row the 2-simplices 245, 045, 046, 016; middle row the 1-simplices 34, 23, 36, 25, 05, 01, 24, 46, 45, 04, 06, 16; and bottom row the 0-simplices 3, 4, 2, 6, 5, 0, 1]
where an unmarked edge implicitly has a downward arrow (suppressed
to avoid cluttering the picture). Now 𝐴0 = {𝑣3 , 𝑣4 , 𝑣2 , 𝑣6 , 𝑣1 }, 𝐵1 =
{𝑣2 𝑣3 , 𝑣0 𝑣4 , 𝑣2 𝑣5 , 𝑣3 𝑣6 , 𝑣0 𝑣1 }, 𝐶0 = {𝑣0 , 𝑣5 }, and 𝐶1 = {𝑣3 𝑣4 , 𝑣2 𝑣4 , 𝑣4 𝑣6 ,
𝑣0 𝑣6 }. Hence 𝑅1 is the subgraph
[Figure: the subgraph 𝑅1 , with top row 34, 23, 36, 24, 46, 04, 06, 25, 01 and bottom row 3, 4, 2, 6, 5, 0, 1.]
We also compute 𝐴1 = {𝑣4 𝑣5 , 𝑣1 𝑣6 , 𝑣0 𝑣5 }, 𝐵2 = {𝑣2 𝑣4 𝑣5 , 𝑣0 𝑣4 𝑣5 ,
𝑣0 𝑣1 𝑣6 }, 𝐶1 = {𝑣3 𝑣4 , 𝑣2 𝑣4 , 𝑣4 𝑣6 , 𝑣0 𝑣6 }, and 𝐶2 = {𝑣0 𝑣4 𝑣6 } so that 𝑅2 is
given by
[Figure: the subgraph 𝑅2 , with top row 245, 045, 046, 016 and bottom row 34, 24, 46, 45, 05, 06, 16.]
Algorithm 3 below works inductively on the link of a vertex. A ver-
tex 𝑣 is chosen in Step 2 and its lower link computed. Then 𝑣 is de-
termined to be either critical or part of a regular pair. If it is critical, we
move on and another vertex is chosen. Otherwise, Step 8 passes this
lower link to Algorithm 2, which then passes it back to Algorithm 3. This gives a
gradient vector field on the link of 𝑣. From this information, Steps 9–15
determine a vector in the gradient vector field.
Algorithm 3 (King et al.) ExtractRaw
Input: Simplicial complex 𝐾, injective 𝑓0 ∶ 𝑉(𝐾) → ℝ
Output: Gradient vector field (𝐴, 𝐵, 𝐶, 𝑟 ∶ 𝐵 → 𝐴) on 𝐾
1 Initialize 𝐴, 𝐵, 𝐶 to be empty
2 for all 𝑣 ∈ 𝐾0 do
3 Let 𝐾 ′ ∶= the lower link of 𝑣
4 if 𝐾 ′ = ∅ then add 𝑣 to 𝐶
5 else
6 Add 𝑣 to 𝐴
7 Let 𝑓0′ ∶ 𝐾0′ → ℝ be the restriction of 𝑓0
8 (𝐴′ , 𝐵 ′ , 𝐶 ′ , 𝑟′ ) ← Extract(𝐾 ′ , 𝑓0′ , ∞)
9 Find the 𝑤0 ∈ 𝐶0′ such that 𝑓0′ (𝑤0 ) is the smallest
10 Add 𝑣𝑤0 to 𝐵
11 Define 𝑟(𝑣𝑤0 ) ∶= 𝑣
12 for each 𝜎 ∈ 𝐶 ′ − {𝑤0 } add 𝑣 ∗ 𝜎 to 𝐶
13 for each 𝜎 ∈ 𝐵 ′ add 𝑣 ∗ 𝜎 to 𝐵
14 Add 𝑣 ∗ 𝑟′ (𝜎) to 𝐴
15 Define 𝑟(𝑣 ∗ 𝜎) = 𝑣 ∗ 𝑟′ (𝜎)
16 end if
17 end for
Example 9.7. We will illustrate the steps of Algorithm 3 on the sim-
plicial complex given immediately after Exercise 9.5. To begin, set 𝐴 =
𝐵 = 𝐶 = ∅. For Step 2, let 𝑣 ∶= 𝑣3 . Then the lower link of 𝑣 is given
by 𝐾 ′ = {𝑣2 }, so by Step 6 we have 𝐴 = {𝑣3 } and 𝑓0′ ∶ {𝑣2 } → ℝ given by
𝑓0′ (𝑣2 ) = 2. At this point we are on Step 8 of Algorithm 3, and we pass
(𝐾 ′ , 𝑓0′ , ∞) into Algorithm 2, which immediately sends us back to Algo-
rithm 3 but this time with a different input. Our only choice for a vertex
in Step 2 is 𝑣2 . We see that the lower link of 𝑣2 is empty, so that by Step
4 we have 𝐶 ′ = {𝑣2 } (note that this is the set 𝐶 ′ for 𝐾 ′ , not for 𝐾). Since
𝐾 ′ is a single point, we have completed Step 2, which in turn completes
this run of Algorithm 3 called in Step 1 of Algorithm 2. Moving to Step
2 of Algorithm 2, we observe that dim(𝐾) = 0, so Step 2 is skipped and
we complete this run of Algorithm 2 with output 𝐴′ = 𝐵 ′ = ∅, 𝐶 ′ = {𝑣2 },
and 𝑟′ = ∅. Recall that we are still on Step 8 of Algorithm 3, and now
we have the output of Algorithm 2. There is only one element in 𝐶 ′ , so
𝑤0 ∶= 𝑣2 , and we add 𝑣3 𝑣2 ∈ 𝐵, defining 𝑟(𝑣3 𝑣2 ) ∶= 𝑣3 . In other words,
we have created an arrow (𝑣3 , 𝑣3 𝑣2 ) in a gradient vector field on 𝐾. Since
𝐶 ′ − {𝑤0 } = ∅, we skip Step 12, and since 𝐵 ′ = ∅, we also skip Steps
13–15. Moving back to Step 2, we choose a vertex in 𝐾 other than 𝑣2 and
repeat.
The ExtractRaw algorithm (Algorithm 3) takes in a simplicial com-
plex 𝐾 and injective function on the vertex set. It outputs a “raw” gradi-
ent vector field (given by 𝐴, 𝐵, 𝐶, 𝑟) in the sense that it may have many
critical simplices. One way to improve a given gradient vector field is
to cancel out superfluous critical simplices by the method of canceling
critical simplices described in Proposition 4.22. This will be done via
two more algorithms. Algorithm 4, or Cancel, reverses the arrows in a
unique gradient path between critical simplices 𝜏 and 𝜎. This algorithm
is utilized in Algorithm 5, and Algorithm 5 is utilized in our main algo-
rithm, Algorithm 2. Recall that we use 𝜎0 → 𝜎1 → ⋯ → 𝜎𝑛 to denote a
gradient path from 𝜎0 to 𝜎𝑛 . Note that our gradient paths below start at
a critical simplex.
Algorithm 4 (King et al.) Cancel
Input: Simplicial complex 𝐾, injective 𝑓0 ∶ 𝑉(𝐾) → ℝ,
𝜏 ∈ 𝐶𝑗−1 , 𝜎 ∈ 𝐶𝑗 , 1 ≤ 𝑗 ≤ dim(𝐾)
Output: Gradient vector field (𝐴, 𝐵, 𝐶, 𝑟 ∶ 𝐵 → 𝐴) on 𝐾
1 Find unique gradient path 𝜎 = 𝜎1 → 𝜏1 → 𝜎2 → 𝜏2 → ⋯ →
𝜏𝑘 = 𝜏
2 Delete 𝜏 and 𝜎 from 𝐶, add 𝜎 to 𝐵, and add 𝜏 to 𝐴
3 for 𝑖 = 1, … , 𝑘 do
4 Redefine 𝑟(𝜎𝑖 ) = 𝜏𝑖
5 end for
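Algorithm 4 amounts to a handful of set updates once the unique gradient path is in hand. A sketch, with the path stored as the alternating list 𝜎₁, 𝜏₁, 𝜎₂, 𝜏₂, …, 𝜏ₖ (the representation is ours):

```python
def cancel(path, A, B, C, r):
    # path = [s1, t1, s2, t2, ..., tk]: the unique gradient path from the
    # critical j-simplex sigma = s1 to the critical (j-1)-simplex tau = tk
    sigma, tau = path[0], path[-1]
    C.discard(sigma); C.discard(tau)
    B.add(sigma); A.add(tau)
    # reversing the path matches each s_i with t_i
    for s, t in zip(path[0::2], path[1::2]):
        r[s] = t
```

Running this on the path of Example 9.8 reproduces the redefinitions 𝑟(𝑣3 𝑣4 ) = 𝑣3 , 𝑟(𝑣2 𝑣3 ) = 𝑣2 , and 𝑟(𝑣2 𝑣5 ) = 𝑣5 .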
Algorithm 5 (King et al.) ExtractCancel
Input: Simplicial complex 𝐾, injective 𝑓0 ∶ 𝑉(𝐾) → ℝ, 𝑝 ≥ 0,
1 ≤ 𝑗 ≤ dim(𝐾)
Output: Gradient vector field (𝐴, 𝐵, 𝐶, 𝑟 ∶ 𝐵 → 𝐴) on 𝐾
1 for all 𝜎 ∈ 𝐶𝑗 do
2 Find all gradient paths 𝜎 = 𝜎𝑖1 → 𝜎𝑖2 → ⋯ → 𝜎𝑖ℓ𝑖 ∈
𝐶𝑗−1 with maxf0 (𝜎𝑖ℓ𝑖 ) > maxf0 (𝜎) − 𝑝
3 for all 𝑖 do
4 if 𝜎𝑖ℓ𝑖 does not equal any other 𝜎𝑗ℓ𝑗 let 𝑚𝑖 ∶= maxf0 (𝜎𝑖ℓ𝑖 )
5 if at least one 𝑚𝑖 is defined then
6 Choose 𝑗 with 𝑚𝑗 = min{𝑚𝑖 }
7 Cancel(𝐾, 𝑓0 , 𝜎𝑗ℓ𝑗 , 𝜎, 𝑗)
8 end if
9 end for
10 end for
Example 9.8. To illustrate Algorithms 4 and 5, we again use the simpli-
cial complex 𝐾, 𝑓0 ∶ 𝐾0 → ℝ, and the gradient vector field below, along
with 𝑗 = 1, for our input to Algorithm 5.
[Figure: the simplicial complex on 𝑣0 , … , 𝑣6 with the gradient vector field constructed above.]
We computed that 𝐶1 = {𝑣3 𝑣4 , 𝑣2 𝑣4 , 𝑣4 𝑣6 , 𝑣0 𝑣6 } above. For Step 1 of
Algorithm 5, let 𝜎 ∶= 𝑣3 𝑣4 . We have two gradient paths starting at 𝑣3 𝑣4
to elements of 𝐶0 , namely,
𝑣3 𝑣4 → 𝑣3 → 𝑣2 𝑣3 → 𝑣2 → 𝑣2 𝑣5 → 𝑣5
and
𝑣3 𝑣4 → 𝑣4 → 𝑣4 𝑣0 → 𝑣0 .
Now maxf0 (𝑣5 ) = 5 in the first gradient path while maxf0 (𝑣0 ) = 0
for the second one. For 0 ≤ 𝑝 < 4, only the first gradient path satisfies
maxf0 (𝜎𝑖ℓ𝑖 ) > maxf0 (𝜎) − 𝑝. Since 𝜎1ℓ1 = 𝑣5 does not equal any other
𝜎𝑗ℓ𝑗 , we define 𝑚1 ∶= maxf0 (𝑣5 ) = 5. Moving to Step 6 of Algorithm 5,
we must choose 𝑗 = 1. Thus we pass (𝐾, 𝑓0 , 𝑣5 , 𝑣3 𝑣4 , 1) into Algorithm 4.
For Step 1, the unique gradient path is precisely
𝑣3 𝑣4 → 𝑣3 → 𝑣2 𝑣3 → 𝑣2 → 𝑣2 𝑣5 → 𝑣5 ,
which we had already found. Now delete 𝑣5 and 𝑣3 𝑣4 from 𝐶 and add
𝑣3 𝑣4 ∈ 𝐵 and 𝑣5 ∈ 𝐴. Finally, in Step 3 of Algorithm 4, we redefine
𝑟(𝑣3 𝑣4 ) ∶= 𝑣3 , 𝑟(𝑣2 𝑣3 ) ∶= 𝑣2 , and 𝑟(𝑣2 𝑣5 ) ∶= 𝑣5 . This has the effect of
reversing the path, yielding
[Figure: the same complex with the arrows along the canceled gradient path reversed.]
which has two fewer critical simplices (𝑣5 and 𝑣3 𝑣4 are no longer critical).
We now verify that our algorithms produce the desired output; that
is, we need to show that we produce a Morse matching on the directed
Hasse diagram. By Theorem 2.51, this is equivalent to showing that the
directed Hasse diagram has no directed cycles.
Proposition 9.9. The output (𝐴, 𝐵, 𝐶, 𝑟) produced by Algorithm 3 has
the property that there are no directed cycles in the resulting directed
Hasse diagram.
Proof. We first claim that the function maxf0 is non-increasing along
any directed path in the directed Hasse diagram. We first note that
maxf0 (𝑟(𝜎)) = maxf0 (𝜎) since 𝑣 is the vertex in all mentioned simplices
with the highest value of 𝑓0 . Furthermore, for any face 𝜎 < 𝜏, we have
that maxf0 (𝜎) ≤ maxf0 (𝜏), which proves the claim that maxf0 is non-
increasing along a directed path. It follows that if 𝜎0 → 𝜎1 → ⋯ → 𝜎𝑘 =
𝜎0 is a directed cycle in the directed Hasse diagram, then maxf0 is con-
stant on all of the 𝜎𝑖 . Let 𝑣 be the unique vertex such that 𝑓0 (𝑣) = maxf0 (𝜎𝑗 )
for all 𝑗. Since 𝑣 ∈ 𝜎𝑗 for all 𝑗, we have that 𝜎𝑗 = 𝑣 ∗ 𝜏𝑗 for some simplices
𝜏𝑗 in the lower link of 𝑣. This yields that 𝜏0 → 𝜏1 → ⋯ → 𝜏𝑘 = 𝜏0 is a
directed cycle in the directed Hasse diagram of the lower link of 𝑣. This
is a contradiction by induction on the dimension and Problem 9.10. □
Problem 9.10. Prove that Algorithm 4 does not produce directed cycles,
and thus the output of Algorithm 2 also does not contain directed cycles.
9.2. Iterated critical complexes
We will follow an algorithm due to P. Dłotko and H. Wagner to compute
the Betti numbers of a simplicial complex [52]. There are many other
algorithms to compute homology with discrete Morse theory [74, 82, 83,
104]. Other resources for computing persistent homology with discrete
Morse theory are given at the end of the chapter. Algorithms to compute
homology that do not rely on discrete Morse theory may be found in
[55, IV.2] and [54, Chapter 11], while a more advanced algorithm is given
in [127].
Let 𝐾 be a simplicial complex. We know from our work in Chap-
ter 8 that we can greatly reduce the size of our vector spaces in a chain
complex by finding a gradient vector field on 𝐾. The critical simplices
in each dimension generate vector spaces, so the fewer critical simplices
the better, and counting 𝑉-paths between pairs of critical simplices of
codimension 1 determines the boundary operator. This gives the criti-
cal complex constructed in Section 8.4. But sometimes this process does
not work as well as we would hope. An algorithm may produce an op-
timal discrete Morse function, a discrete Morse function where every
simplex is critical, or anything in between. One way to help address this
problem is to iteratively compute critical complexes, that is, compute the
critical complex of the critical complex of the critical complex, etc. If the
boundary operator eventually becomes 0, then we can simply read off
the Betti numbers from the vector space dimensions without having to
do any kind of matrix reductions or complicated counting of paths. The
idea is illustrated in the following example.
Example 9.11. Let 𝐾 be the simplicial complex along with the gradient
vector field 𝑉 given below:
[Figure: a collapsible two-dimensional simplicial complex 𝐾 on the vertices 𝑣0 , … , 𝑣4 with a gradient vector field 𝑉.]
Suppose we wish to compute the homology of 𝐾. Of course, we can
clearly see that 𝐾 is collapsible so that 𝑏0 (𝐾) = 1 and all its other Betti
numbers are 0. But we wish to illustrate the iterative critical complex
construction. The 𝑐-vector for this complex is 𝑐𝐾⃗ = (5, 6, 2). Hence if we
used the techniques of Chapter 3, we would obtain the chain complex
𝕜² --> 𝕜⁶ --> 𝕜⁵ .
Although this gradient vector field is far from optimal, the number
of critical simplices in each dimension is 𝑚0 = 𝑚1 = 2 and 𝑚2 = 1.
Applying Theorem 8.31 yields the critical complex
𝕜¹ --> 𝕜² --> 𝕜² ,
which is certainly an improvement over the first chain complex. But
we wish to reduce this even further. We can do this by pairing critical
simplices that have an odd number of 𝑉-paths between them. We will
draw the directed Hasse diagram of the critical simplices with an edge
between nodes if and only if there are an odd number of paths between
them:
[Figure: the directed Hasse diagram on the critical simplices, with 𝑣0 𝑣1 𝑣3 in the top row, 𝑣2 𝑣3 and 𝑣0 𝑣1 in the middle row, and 𝑣4 and 𝑣1 in the bottom row.]
Can we find a discrete Morse matching on this directed Hasse dia-
gram? There are several. One is given below, where matched nodes are
connected by a thickened edge.
[Figure: the same diagram with the matching indicated by thickened edges: 𝑣0 𝑣1 𝑣3 is matched with 𝑣2 𝑣3 , and 𝑣0 𝑣1 is matched with 𝑣1 .]
With 𝑣0 𝑣1 𝑣3 matched to 𝑣2 𝑣3 and 𝑣0 𝑣1 matched to 𝑣1 , the only sim-
plex left unmatched is 𝑣4 . In other words, a second iteration of the criti-
cal complex construction produces the critical complex
𝕜⁰ --> 𝕜⁰ --> 𝕜¹ ,
which is the best we can do for this complex.
In Section 9.2.2, we will refer to the above structure as a
discrete Morse graph. The critical simplices in a fixed dimension may
be viewed as generating a vector space, while the gradient vector field
may be viewed as the boundary operator. Hence a discrete Morse graph
is also a critical complex. Strictly speaking, the difference between the
discrete Morse graph and the critical complex is that the critical com-
plex also contains boundary operator information. The notion of taking
the critical complex of a critical complex is made precise using algebraic
discrete Morse theory, which was discovered independently by Kozlov
[102], Sköldberg [142], and Joellenbeck and Welker [87]. Other treat-
ments of algebraic discrete Morse theory may be found in [99, Section
9.4] and [103, Section 11.3].¹ We will make use of the boundary opera-
tor in Section 9.2.3 without discussing the details. The interested reader
may consult any one of the above sources for more details on algebraic
Morse theory.
9.2.1. Computing the upward Hasse diagram. Before we can do any
computing, we need a data structure to encode a simplicial complex.
Given a list of facets which generate a simplicial complex, we need to
store this as a Hasse diagram. We give a simple method to accomplish
this in Algorithm 6.² A more refined but also more complicated algo-
rithm may be found in the paper by V. Kaibel et al. [96] and is dis-
cussed further in [90]. Alternatively, one may utilize free online soft-
ware equipped with packages to compute the Hasse diagram, such as
polymake [75].
The Hasse diagram we construct with our algorithm differs from the
Hasse diagram introduced in Section 2.2.2 in two respects. First, we will
include the empty set in the very bottom row of the Hasse diagram, along
with edges from the empty set to each node. Second, every edge in the
Hasse diagram will feature an upward arrow. Hence we will call the re-
sult of our algorithm the upward Hasse diagram of 𝐾. The algorithm
in Section 9.2.2 will then reverse some of these arrows, yielding a dis-
crete Morse matching.³ Freely downloadable files which contain facets
of many interesting and complicated simplicial complexes may be found
¹As mentioned before, note that these authors refer to the critical complex as the Morse complex.
²Thanks to Pawel Dłotko for suggesting this algorithm along with Example 9.12.
³This is opposite to the convention of computing the directed Hasse diagram in Section 2.2.2, but the choice is arbitrary.
at the Simplicial Complex Library [81] or the Library of Triangulations
[30].
The data structure for ℋ will be given by ℋ = (𝐺, 𝑉) where 𝐺 is the
set of vertices of ℋ and 𝑉 consists of ordered pairs (𝜎, 𝜏) of vertices in 𝐺,
indicating an arrow from 𝜎 to 𝜏.
Algorithm 6 Upward Hasse diagram ℋ
Input: List of facets 𝑆
Output: Upward Hasse diagram ℋ = (𝐺, 𝑉) of simplicial
complex 𝐾 generated by 𝑆
1 Initialize ℋ = 𝐺 = 𝑉 = ∅
2 for every ∅ ≠ 𝜎 ∈ 𝑆
3 if 𝜎 ∉ 𝐺
4 𝐺 ← 𝐺 ∪ {𝜎}
5 end if
6 for every vertex 𝑣 ∈ 𝜎
7 𝜎′ ∶= 𝜎 − {𝑣}
8 if 𝜎′ ∉ 𝐺
9 𝐺 ← 𝐺 ∪ {𝜎′ } and 𝑆 ← 𝑆 ∪ {𝜎′ }
10 else replace 𝜎′ with the instance of 𝜎′ already in 𝐺
11 𝑉 ← 𝑉 ∪ {(𝜎′ , 𝜎)}
12 end if
13 end for
14 end for
15 Return ℋ = (𝐺, 𝑉)
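A direct rendering of Algorithm 6 might look like the following Python sketch, with facets as frozensets and a stack playing the role of the growing list 𝑆 (names ours):

```python
def upward_hasse(facets):
    # Returns (G, E): the nodes of the upward Hasse diagram and the set
    # of upward arrows (face, sigma), one for each codimension-1 face
    # of each simplex, with the empty set at the bottom.
    G, E = set(), set()
    stack = [frozenset(f) for f in facets]
    while stack:
        sigma = stack.pop()
        G.add(sigma)
        for v in sigma:
            face = sigma - {v}
            if face not in G:
                G.add(face)
                stack.append(face)
            E.add((face, sigma))
    return G, E
```

On the single facet 𝑎𝑏𝑐 of Example 9.12, this produces the eight nodes ∅, 𝑎, 𝑏, 𝑐, 𝑎𝑏, 𝑎𝑐, 𝑏𝑐, 𝑎𝑏𝑐 and twelve upward arrows.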
Example 9.12. Let 𝐾 = Δ2 on vertices 𝑎, 𝑏, and 𝑐. Then 𝑆 = {𝑎𝑏𝑐}; i.e.,
⟨𝑎𝑏𝑐⟩ = Δ2 . We illustrate Algorithm 6. For Step 2, 𝜎 = 𝑎𝑏𝑐 ∈ 𝑆 is the
only option, and since 𝐺 = ∅, we add 𝑎𝑏𝑐 to 𝐺 in Step 4. For Step 6 pick
𝑣 = 𝑎, and construct 𝜎′ ∶= 𝑎𝑏𝑐 − {𝑎} = 𝑏𝑐 in Step 7. Now 𝑏𝑐 ∉ 𝐺, so we
add 𝑏𝑐 ∈ 𝐺 and 𝑏𝑐 ∈ 𝑆 in Step 9. For Step 11, we add the directed edge
(𝑏𝑐, 𝑎𝑏𝑐) ∈ 𝑉. Moving back to Step 6, we repeat with both 𝜎′ = 𝑏 and
𝜎′ = 𝑐. After coming out of the first iteration of Step 2 with 𝜎 = 𝑎𝑏𝑐, we
have constructed
[Figure: the partial Hasse diagram with node 𝑎𝑏𝑐 above nodes 𝑎𝑏, 𝑎𝑐, and 𝑏𝑐, with an upward arrow from each edge to 𝑎𝑏𝑐.]
with 𝑆 = {𝑎𝑏𝑐, 𝑎𝑏, 𝑎𝑐, 𝑏𝑐}. Now in Step 2, pick 𝜎 = 𝑏𝑐. We see that
𝑏𝑐 ∈ 𝐺, so we skip to Step 6. As before, we run through 𝑣 = 𝑏 and 𝑣 = 𝑐
to obtain
[Figure: the partial Hasse diagram, now with nodes 𝑏 and 𝑐 added below 𝑎𝑏, 𝑎𝑐, and 𝑏𝑐, and upward arrows (𝑏, 𝑏𝑐) and (𝑐, 𝑏𝑐).]
with 𝑆 = {𝑎𝑏𝑐, 𝑎𝑏, 𝑎𝑐, 𝑏𝑐, 𝑏, 𝑐}. Going back to Step 2 and choosing 𝜎 = 𝑎𝑐,
consider 𝜎′ = 𝑎𝑐 − {𝑎} = 𝑐 in Step 7. Since 𝑐 ∈ 𝐺 already, we must
view this 𝑐 as the same 𝑐 already in 𝐺 via Step 10. In particular, there
is a directed edge (𝑐, 𝑏𝑐) ∈ 𝑉. Next we add the directed edge (𝑐, 𝑎𝑐) in
Step 11. After we consider 𝜎′ = 𝑎, we have
[Figure: the partial Hasse diagram with bottom row 𝑎, 𝑏, 𝑐 and the arrows (𝑐, 𝑎𝑐) and (𝑎, 𝑎𝑐) added.]
with 𝑆 = {𝑎𝑏𝑐, 𝑎𝑏, 𝑎𝑐, 𝑏𝑐, 𝑏, 𝑐, 𝑎}. Returning to Step 2 with 𝜎 = 𝑎𝑏, we see
that no new nodes are created, but we do obtain two new directed edges
(𝑎, 𝑎𝑏) and (𝑏, 𝑎𝑏). This yields
[Figure: the partial Hasse diagram with the new arrows (𝑎, 𝑎𝑏) and (𝑏, 𝑎𝑏) added.]
with the same 𝑆 as before. Now with 𝜎 = 𝑎 we construct the empty set,
adding the directed edge from ∅ to 𝑎:
[Figure: the partial Hasse diagram with the node ∅ added at the bottom and an arrow from ∅ to 𝑎.]
After running through 𝑏 and 𝑐 in Step 2, we have run through all non-
empty 𝜎 ∈ 𝑆, yielding the directed Hasse diagram
[Figure: the completed upward Hasse diagram of Δ2 , with rows 𝑎𝑏𝑐; 𝑎𝑏, 𝑎𝑐, 𝑏𝑐; 𝑎, 𝑏, 𝑐; and ∅, and all arrows pointing upward.]
9.2.2. Computing a discrete Morse graph. Suppose we have con-
structed the upward Hasse diagram ℋ. We give an algorithm to con-
struct a discrete Morse matching 𝑀. In fact, we can think of the upward
Hasse diagram as already communicating a discrete Morse matching—
namely, the trivial one where there are no pairings so that every simplex
is critical.
Strictly speaking, a discrete Morse matching does not contain infor-
mation about the critical simplices even though the critical simplices are
the ones which are not matched. Hence, given a discrete Morse match-
ing 𝑀 on a directed Hasse diagram ℋ, we define the discrete Morse
graph 𝐺 to be the directed Hasse diagram ℋ𝑀 along with the matching
𝑀 and set of critical simplices 𝐶. In the case of the upward Hasse dia-
gram, 𝑀 = ∅ while 𝐶 consists of all nodes of ℋ. We write 𝐺 = (𝑀, 𝐶),
viewing 𝐺 as a directed graph along with matching and critical simplex
information. Note that there is no substantive difference between ℋ𝑀
and 𝐺. Rather, as mentioned just before Section 9.2.1, the technical dis-
tinction is that 𝐺 explicitly contains the information 𝑀 and 𝐶, while this
information can be deduced from ℋ𝑀 .
Recall that if 𝜎 ∈ 𝐾 is a simplex, the node corresponding to 𝜎 in ℋ
is also called 𝜎.
Algorithm 7 (Dłotko and Wagner) Discrete Morse graph
Input: Discrete Morse graph 𝐺 = (𝑀, 𝐶)
Output: Discrete Morse graph 𝐺 = (𝑀, 𝐶)
1 while there exists an unmatched element 𝛼 ∈ 𝐶 with a unique
unmatched element 𝛽 ∈ 𝜕(𝛼) do
2 Match 𝛼 and 𝛽; i.e., reverse arrow on edge 𝛼𝛽 to be upward.
3 end while
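A greedy implementation of Algorithm 7's matching rule, again as a sketch over the edge set of the upward Hasse diagram (names ours):

```python
def morse_matching(simplices, edges):
    # edges: upward arrows (face, coface) of the Hasse diagram.
    # Greedily match a critical alpha with its unique unmatched
    # codimension-1 face beta, until no such pair remains.
    matching = {}          # beta -> alpha
    used = set()           # every simplex in some matched pair
    changed = True
    while changed:
        changed = False
        for alpha in simplices:
            if alpha in used:
                continue
            free = [b for (b, a) in edges if a == alpha and b not in used]
            if len(free) == 1:
                matching[free[0]] = alpha
                used.update({alpha, free[0]})
                changed = True
    critical = set(simplices) - used
    return matching, critical
```

On a collapsible complex such as a path (with ∅ included, as in the upward Hasse diagram), this greedy pass can leave no critical simplex at all.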
The output discrete Morse graph will have no more critical simplices
than the input discrete Morse graph. Algorithm 7 is similar to Algo-
rithm 4 from Section 9.1 in the sense that both algorithms attempt to
cancel critical simplices. However, it should be noted that Joswig and
Pfetsch have shown that the problem of minimizing the number of crit-
ical cells is NP-complete and MAXSNP-hard [91]. In other words, un-
less P = NP, there is no polynomial-time algorithm that minimizes the
number of critical simplices.
9.2.3. Computing the boundary operator. Let 𝐺 = (𝑀, 𝐶) be a dis-
crete Morse graph. For any nodes 𝑠, 𝑡 ∈ 𝐺, let 𝑃𝑠 (𝑡) be the number of
distinct directed paths from node 𝑠 to node 𝑡 and let prev(𝑣) ∶= {𝑥 ∶
𝑥𝑣 is an edge in 𝐺}. We then have the recurrence relation
𝑃𝑠 (𝑢) ∶= 1 for 𝑢 = 𝑠, and
𝑃𝑠 (𝑢) ∶= ∑𝑣∈prev(𝑢) 𝑃𝑠 (𝑣) for 𝑢 ≠ 𝑠.
A topological sorting of a directed acyclic graph 𝐺 (e.g. a discrete
Morse graph or directed Hasse diagram) is a total ordering ≺ of the ver-
tices such that for every directed edge 𝑢𝑣 from vertex 𝑢 to vertex 𝑣, we
have that 𝑢 ≺ 𝑣. Several well-known algorithms exist to topologically
sort the vertices of a directed graph. See for instance Kahn’s algorithm
[94] or depth-first search in [46, Section 22.4].
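For concreteness, here is Kahn's algorithm as a short sketch: repeatedly remove a node with no remaining incoming edges.

```python
from collections import deque

def topo_sort(nodes, edges):
    # Kahn's algorithm for a directed acyclic graph
    indeg = {n: 0 for n in nodes}
    out = {n: [] for n in nodes}
    for u, v in edges:
        out[u].append(v)
        indeg[v] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != len(nodes):
        raise ValueError("graph has a directed cycle")
    return order
```

The final length check doubles as a cycle detector, which is a useful sanity check that a purported discrete Morse graph really is acyclic.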
Recall from Section 3.2 that we compute homology over 𝔽2 ∶= {0, 1}
so that 1 + 1 ≡ 0 mod 2.
Algorithm 8 (Dłotko and Wagner) Boundary operator
Input: Discrete Morse graph 𝐺 = (𝑀, 𝐶)
Output: Boundary operator 𝜕
1 Topologically sort the vertices of 𝐺
2 for each critical vertex 𝑠 ∈ 𝐺 do
3 Assign 𝑃𝑠 (𝑣) ∶= 0 for each vertex 𝑣 ≠ 𝑠
4 Assign 𝑃𝑠 (𝑠) ∶= 1
5 for each vertex 𝑐 following 𝑠 in topological order do
6 if 𝑐 is critical then
7 𝜕(𝑠, 𝑐) ∶= 𝑃𝑠 (𝑐) mod 2
8 else
9 for each 𝑣 such that 𝑐𝑣 is an edge do
10 𝑃𝑠 (𝑣) += 𝑃𝑠 (𝑐)
Algorithm 8 works in 𝑂(|𝐶|(𝑉 + 𝐸)) time, where 𝑉 is the number of
vertices and 𝐸 the number of edges of 𝐺. A proof that the algorithm is
correct may be found in [52, Theorem 5.1].
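The dynamic program of Algorithm 8 can be sketched as follows, with the graph given by a topological order, an edge set, and the set of critical nodes; only the odd path counts are recorded (sparse representation is our choice):

```python
def boundary_mod2(order, edges, critical):
    # For each critical s, propagate the path counts P_s along the
    # topological order; record (s, c) when the number of directed
    # s -> c paths is odd. Critical targets absorb and do not propagate.
    succ = {u: [v for (x, v) in edges if x == u] for u in order}
    boundary = {}
    for s in critical:
        paths = {u: 0 for u in order}
        paths[s] = 1
        for c in order[order.index(s):]:
            if c in critical and c != s:
                if paths[c] % 2 == 1:
                    boundary[(s, c)] = 1
            else:
                for v in succ[c]:
                    paths[v] += paths[c]
    return boundary
```

Two gradient paths between the same pair of critical simplices cancel mod 2, exactly as in Example 8.34.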
9.2.4. Computing homology with the iterated critical complex. At
this point, we have assembled all the necessary ingredients, and now it
is simply a matter of piecing them together. After building the upward
Hasse diagram (viewed as the trivial discrete Morse graph on 𝐾) of a sim-
plicial complex 𝐾, we compute the critical complex of 𝐾; in other words,
we build a better discrete Morse graph on 𝐾. Given this critical com-
plex, we build a new critical complex, one with hopefully fewer critical
simplices. From this new critical complex, we build yet another criti-
cal complex, etc. This process of building a new critical complex from
an existing critical complex is known as an iterated discrete Morse
decomposition. It is guaranteed that at each iteration we will always
obtain a critical complex with fewer or the same number of critical sim-
plices. Furthermore, we are guaranteed that this process will eventually
stabilize by returning the same critical complex as the previous input af-
ter a finite number of steps. Once this happens, the iteration breaks and
the Betti numbers are computed. See [52, Section 6] for details.
Algorithm 9 (Dłotko and Wagner) Homology via iterated discrete
Morse decomposition
Input: Simplicial complex 𝐾
Output: Betti numbers 𝛽𝑖 (𝐾)
1 Compute upward Hasse diagram ℋ of 𝐾
2 ℋ =∶ 𝐺, the trivial critical complex
3 while true do
4 𝒞 ∶= build critical complex of 𝐺 = (𝑀, 𝐶) (Algorithm 7)
5 𝜕 ∶= compute boundary operator (Algorithm 8)
6 if 𝒞 = 𝐺 then break
7 𝐺 ∶= 𝒞
8 end while
9 for 𝑖 ∶= 0 to dim(𝐾) do
10 𝛽𝑖 ∶= number of 𝑖-dimensional critical simplices of 𝐺
In the same paper introducing Algorithm 9, the authors give an algo-
rithm to compute persistent homology using the iterated discrete Morse
decomposition approach [52, Sections 7–8]. Other algorithms that utilize
discrete Morse theory in the service of persistent homology are found in
[38, 86, 98, 111, 119].
Chapter 10
Strong discrete Morse theory
This final chapter discusses another kind of discrete Morse theory and,
along the way, spends some effort developing other types of discrete
topology that one can pursue. In that sense, this chapter shows how
discrete Morse theory may act as a catalyst in exposing one to other
kinds of topology. The chapter reflects the personal taste of the author.
One could pursue many other kinds of topology as well as other kinds
of mathematics using discrete Morse theory. See the preface for a brief
survey of several such directions.
10.1. Strong homotopy
10.1.1. Simplicial maps. When encountering a new area of mathe-
matics, one should ask the question “Where are the functions?” Func-
tions are how we study structure in modern mathematics. They carry
the structure of one object into the structure of another object. For ex-
ample, in linear algebra, the structure of a vector space 𝑉 is carried into
the structure of a vector space 𝑊 by a function which is a linear trans-
formation. The properties of a linear transformation guarantee that the
vector space structure is preserved. In algebra, group, ring, and field
homomorphisms are functions respecting the structure of groups, rings,
and fields, respectively. In all cases, each kind of function must satisfy
certain properties to preserve the structure of the object.
What is the right notion of a function between simplicial complexes?
We desire a function 𝑓 ∶ 𝐾 → 𝐿 between simplicial complexes that pre-
serves the simplicial structure.
Definition 10.1. A simplicial function or simplicial map 𝑓 ∶ 𝐾 →
𝐿 is a function 𝑓𝑉 ∶ 𝑉(𝐾) → 𝑉(𝐿) on the vertex sets of 𝐾 and 𝐿 with
the property that if 𝜎 = 𝑣𝑖0 𝑣𝑖1 ⋯ 𝑣𝑖𝑚 is a simplex in 𝐾, then 𝑓(𝜎) ∶=
𝑓𝑉 (𝑣𝑖0 )𝑓𝑉 (𝑣𝑖1 ) ⋯ 𝑓𝑉 (𝑣𝑖𝑚 ) is a simplex in 𝐿.
In other words, a simplicial map is one induced by a map on the
vertices that takes simplices to simplices; i.e., it preserves the simplicial
structure, even if it may take a larger simplex to a smaller simplex. The
simplicial structure is preserved in the sense that if 𝛼 ⊆ 𝛽, then 𝑓(𝛼) ⊆
𝑓(𝛽).
Problem 10.2. Let 𝑓 ∶ 𝐾 → 𝐿 be a simplicial map. Prove that if 𝛼 ⊆ 𝛽,
then 𝑓(𝛼) ⊆ 𝑓(𝛽).
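Definition 10.1 can be checked mechanically: a vertex map is simplicial exactly when the image of every simplex of 𝐾 is a simplex of 𝐿. A sketch (representation and names ours):

```python
def is_simplicial(K, L, fV):
    # fV: dict sending vertices of K to vertices of L
    Lset = set(map(frozenset, L))
    return all(frozenset(fV[v] for v in sigma) in Lset for sigma in K)
```

The test below mirrors Exercise 10.3: the constant map 𝐾 = ⟨𝑎𝑏𝑐⟩ → 𝐿 = ⟨𝑎𝑏, 𝑏𝑐⟩ is simplicial, while the identity on vertices is not.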
Exercise 10.3. Determine whether the following functions are simpli-
cial maps. If so, prove it. If not, show why not.
(i) For any simplicial complexes 𝐾 and 𝐿 and a fixed vertex 𝑢 ∈ 𝐿,
𝑓 ∶ 𝐾 → 𝐿 defined by 𝑓(𝑣𝑖 ) = 𝑢 for every 𝑣𝑖 ∈ 𝑉(𝐾).
(ii) For a subcomplex 𝑈 ⊆ 𝐾, the inclusion 𝑖𝑈 ∶ 𝑈 → 𝐾 defined by
𝑖𝑈 (𝑣) = 𝑣.
(iii) For 𝐾 = ⟨𝑎𝑏𝑐⟩ and 𝐿 = ⟨𝑎𝑏, 𝑏𝑐⟩, 𝑓 ∶ 𝐾 → 𝐿 defined by 𝑓(𝑥) = 𝑥
for every 𝑥 ∈ 𝐾.
Hopefully, you showed that the very last example in Exercise 10.3 is
not a simplicial map. You might realize that this function corresponds to
the elementary collapse obtained by removing the free pair {𝑎𝑐, 𝑎𝑏𝑐}. In
fact, an elementary collapse is almost never a simplicial map (see Prob-
lem 10.4).
Problem 10.4. Let {𝜎(𝑝) , 𝜏(𝑝+1) } be a free pair for a simplicial complex
𝐾 𝑛 , 0 ≤ 𝑝 ≤ 𝑛 − 1. Determine with proof when the induced function
𝑓 ∶ 𝐾 → 𝐾 − {𝜎, 𝜏} is a simplicial map. For every vertex 𝑣 ∈ 𝐾, the
induced function 𝑓 is defined by 𝑓(𝑣) = 𝑣 when 𝑝 > 0, while when 𝑝 = 0,
𝑓(𝜎) = 𝑤, where 𝑤 is the vertex with 𝜏 = 𝜎𝑤, and all other vertices are sent to themselves.
The prospect of combining discrete Morse theory with simplicial
maps thus seems bleak. However, if we start with an elementary col-
lapse and continue collapsing, as in the following example, we do notice
something.
[Figure: a sequence of two elementary collapses applied to a complex on the vertices 𝑎, 𝑏, 𝑐, 𝑑, ending with the vertex 𝑏 collapsed onto 𝑎.]
While the initial collapse is not a simplicial map, the composition of
the two elementary collapses, that is, the composition defined by 𝑓(𝑎) =
𝑓(𝑏) = 𝑎 and 𝑓(𝑐) = 𝑐, is a simplicial map.
Exercise 10.5. Check that the above composition is a simplicial map.
We thus desire to find a condition under which a composition of
elementary collapses corresponds to a simplicial map. This will be the
goal of the next subsection.
10.1.2. Dominating vertices. In the previous subsection, we found a
sequence of elementary collapses that induced a simplicial map. The
property that allowed us to do this concerned the relationship between
the vertices 𝑎 and 𝑏 in our example. More generally, we define the fol-
lowing:
Definition 10.6. Let 𝐾 be a simplicial complex. A vertex 𝑣 is said to
dominate 𝑣′ (it is also said that 𝑣′ is dominated by 𝑣) if every maximal
simplex (facet) of 𝑣′ also contains 𝑣.
The idea behind this definition is that 𝑣′ “can’t get away from” 𝑣.
Example 10.7. Borrowing an example from [63], we have the following
simplicial map where 𝑣′ is sent to 𝑣 and all other vertices are sent to
themselves.
[Figure: a simplicial complex containing vertices 𝑣, 𝑣′ , and 𝑤 (left), and its image under the map sending 𝑣′ to 𝑣 (right).]
Note here that 𝑣 dominates 𝑣′ . In general, we may ask of any pair of
vertices “does 𝑎 dominate 𝑏?” We see that 𝑣′ does not dominate 𝑣 since
at least one facet of 𝑣 does not contain 𝑣′ . Similarly, 𝑣 does not dominate
𝑤, nor does 𝑤 dominate 𝑣.
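Domination is a finite check over facets, so it is easy to test by machine. A small sketch, assuming (hypothetically) that a complex is encoded by its list of facets:

```python
def dominates(facets, v, vp):
    """True if vertex v dominates vertex vp, i.e., every facet
    (maximal simplex) containing vp also contains v."""
    facets_of_vp = [f for f in facets if vp in f]
    return bool(facets_of_vp) and all(v in f for f in facets_of_vp)

# A hypothetical complex with facets abc and acd.
facets = [{'a', 'b', 'c'}, {'a', 'c', 'd'}]
```

Here both 𝑎 and 𝑐 dominate 𝑏, since the only facet containing 𝑏 contains them, but 𝑏 does not dominate 𝑎: the facet 𝑎𝑐𝑑 misses 𝑏.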
Exercise 10.8. Is it possible to have a pair of vertices 𝑢 and 𝑣 such that
𝑣 dominates 𝑢 and 𝑢 dominates 𝑣?
For any vertex 𝑣 ∈ 𝐾, let 𝐾 − {𝑣} ∶= {𝜎 ∈ 𝐾 ∶ 𝑣 ∉ 𝜎}, i.e., the largest
subcomplex of 𝐾 not containing 𝑣. The key observation is that almost by
definition, a dominated vertex corresponds to a simplicial map. Many of
the results in this subsection are due to J. Barmak [21].
Proposition 10.9. Let 𝐾 be a simplicial complex, and suppose that 𝑣
dominates 𝑣′ . Then the function 𝑟 ∶ 𝐾 → 𝐾 − {𝑣′ } defined by 𝑟(𝑥) = 𝑥
for all 𝑥 ≠ 𝑣′ and 𝑟(𝑣′ ) = 𝑣 is a simplicial map called an elementary
strong collapse. Moreover, an elementary strong collapse is a sequence
of elementary (standard) collapses.
The simplicial map 𝑟 in this proposition is called a retraction.
Proof. That 𝑟 is a simplicial map is Problem 10.10. To show that an ele-
mentary strong collapse is a sequence of elementary collapses, suppose
𝑣 dominates 𝑣′ and let 𝜎1 , … , 𝜎𝑘 be the facets of 𝑣′ . Then 𝑣 ∈ 𝜎𝑖 for all
𝑖. Write 𝜎𝑖 = 𝑣𝑣′ 𝑣𝑖2 ⋯ 𝑣𝑖𝑛 . We claim that {𝜎𝑖 , 𝜏𝑖 } is a free pair, where
𝜏𝑖 ∶= 𝜎𝑖 − {𝑣}. Clearly 𝜏𝑖 ⊆ 𝜎𝑖 . If 𝜏𝑖 is a proper face of another simplex 𝛽,
then 𝑣′ ∈ 𝛽 and 𝑣 ∉ 𝛽 since 𝜎𝑖 is maximal. Thus {𝜎𝑖 , 𝜏𝑖 } is a free pair, and
it may be removed. Do this over all facets of 𝑣′ to see that the elementary
strong collapse is a sequence of elementary collapses. □
Problem 10.10. Prove that 𝑟 in the statement of Proposition 10.9 is a
simplicial map.
Exercise 10.11. According to Proposition 10.9, every elementary strong
collapse is a sequence of standard collapses. For a fixed elementary
strong collapse, is such a sequence unique up to choice of collapses?
Formally, we now make the following definition.
Definition 10.12. If 𝑣 dominates 𝑣′ , then the removal of 𝑣′ from 𝐾
(without necessarily referencing the induced map) is called an elemen-
tary strong collapse and is denoted by 𝐾 ↘↘ 𝐾 − {𝑣′ }. The addition
of a dominated vertex is an elementary strong expansion and is de-
noted by ↗↗. A sequence of elementary strong collapses or elementary
strong expansions is also called a strong collapse or strong expansion,
respectively, and also denoted by ↗↗ or ↘↘, respectively. If there is a
sequence of strong collapses and expansions from 𝐾 into 𝐿, then 𝐾 and
𝐿 are said to have the same strong homotopy type. In particular, if
𝐿 = ∗, then 𝐾 is said to have the strong homotopy type of a point. If
there is a sequence of elementary strong collapses from 𝐾 to a point, 𝐾
is called strongly collapsible.
The reader will immediately recognize the similarity to simple ho-
motopy type from Section 1.2.
Example 10.13. The above definition immediately raises the question
of whether strongly collapsible is the same as collapsible. Clearly strongly
collapsible implies collapsible, but the following example shows that the
converse is false.
[Figure: the Argentinian complex.]
Because this simplicial complex was first used in this context by Bar-
mak and Minian [23], it is affectionately referred to as the Argentinian
complex. This simplicial complex is clearly collapsible, but it is not
strongly collapsible since there is no dominating vertex. This latter fact
can be observed by brute force. Thus collapsible does not imply strongly
collapsible.
Remark 10.14. At this point, it is still theoretically possible that the
Argentinian complex could have the strong homotopy type of a point.
As we mentioned in Example 1.67, it is possible to construct collapsi-
ble simplicial complexes for which one can get stuck while performing
a series of collapses. In other words, just because a simplicial complex
is collapsible does not mean any sequence of collapses will bring you
down to a point. Perhaps we could perform several strong expansions
followed by a clever choice of strong collapses to show that the Argen-
tinian complex has the strong homotopy type of a point. The advantage
of working with strong collapses and expansions is that these kinds of
bizarre pathologies do not occur. We will prove this as a special case in
Corollary 10.32 in Section 10.1.3.
10.1.3. Contiguity. We seemingly shift gears now but will make the
connection with the previous section at the end.
Definition 10.15. Let 𝑓, 𝑔 ∶ 𝐾 → 𝐿 be maps between simplicial com-
plexes. Then 𝑓 and 𝑔 are said to be contiguous, denoted by 𝑓 ∼𝑐 𝑔, if
for every simplex 𝜎 ∈ 𝐾 we have that 𝑓(𝜎) ∪ 𝑔(𝜎) is a simplex of 𝐿.
Contiguity almost forms an equivalence relation on the set of sim-
plicial functions from 𝐾 to 𝐿, but not quite.
Problem 10.16. Show that contiguity is reflexive and symmetric. Give
a counterexample to show that it is not in general transitive.
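The contiguity condition of Definition 10.15 can likewise be tested simplex by simplex. A sketch with the same hypothetical encoding as before (complexes as collections of vertex sets, simplicial maps as dictionaries on vertices):

```python
def contiguous(f, g, simplices_K, simplices_L):
    """True if f(sigma) ∪ g(sigma) is a simplex of L for every
    simplex sigma of K (the contiguity condition)."""
    targets = {frozenset(s) for s in simplices_L}
    return all(
        frozenset(f[v] for v in s) | frozenset(g[v] for v in s) in targets
        for s in simplices_K)

# An edge ab mapped into the full triangle xyz, or into its boundary.
K = [{'a'}, {'b'}, {'a', 'b'}]
tri = [{'x'}, {'y'}, {'z'}, {'x', 'y'}, {'y', 'z'}, {'x', 'z'},
       {'x', 'y', 'z'}]
hollow = [{'x'}, {'y'}, {'z'}, {'x', 'y'}, {'y', 'z'}, {'x', 'z'}]
f = {'a': 'x', 'b': 'y'}
g = {'a': 'x', 'b': 'z'}
```

The maps 𝑓 and 𝑔 are contiguous into the full triangle, but not into its boundary (the hollow triangle): 𝑓(𝑎𝑏) ∪ 𝑔(𝑎𝑏) = 𝑥𝑦𝑧, which is not a simplex of the boundary.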
Like the relation ≺ from Definition 5.33, we need to force transitivity
by taking the transitive closure.
Definition 10.17. We say that simplicial maps 𝑓, 𝑔 ∶ 𝐾 → 𝐿 are in the
same contiguity class or strongly homotopic, denoted by 𝑓 ∼ 𝑔, if
there is a sequence
𝑓 = 𝑓0 ∼𝑐 𝑓1 ∼𝑐 ⋯ ∼𝑐 𝑓𝑛−1 ∼𝑐 𝑓𝑛 = 𝑔
of contiguous maps joining 𝑓 and 𝑔.
Combining this with Problem 10.16, we immediately obtain the fol-
lowing:
Proposition 10.18. Being strongly homotopic is an equivalence relation
on simplicial maps from 𝐾 to 𝐿.
Because we are now working with equivalence classes of simplicial
maps (i.e., not individual simplicial maps), we may investigate the
equivalence class of special simplicial maps. Call a simplicial complex
𝐾 minimal if it contains no dominating vertices.
Proposition 10.19. Let 𝐾 be minimal, and suppose that 𝜙 ∶ 𝐾 → 𝐾 is a
simplicial map satisfying 𝜙 ∼ id𝐾 . Then 𝜙 = id𝐾 .
Proof. Let 𝜙 ∼ id𝐾 . We need to show that 𝜙(𝑣) = 𝑣 for every 𝑣 ∈ 𝐾. Let
𝜎 be a facet containing 𝑣. Then 𝜙(𝜎) ∪ 𝜎 is a simplex of 𝐾, and since 𝜎 is a
facet, we have 𝜙(𝑣) ∈ 𝜙(𝜎) ∪ 𝜎 = 𝜎. Hence every facet containing 𝑣 also
contains 𝜙(𝑣). But 𝐾 has no dominating vertices by hypothesis. Hence
𝜙(𝑣) = 𝑣. □
Definition 10.20. Let ∗ ∶ 𝐾 → 𝐿 be any map defined by ∗(𝑣) = 𝑢 for all
𝑣 ∈ 𝐾 and a fixed 𝑢 ∈ 𝐿. If 𝑓 ∼ ∗, we say that 𝑓 is null-homotopic.
Using the generic ∗ to denote the simplicial map sending everything
to a single point is justified by the following:
Proposition 10.21. Suppose 𝐾 and 𝐿 are (connected) simplicial com-
plexes. Let 𝑢, 𝑢′ ∈ 𝐿 be two fixed vertices, and define 𝑢, 𝑢′ ∶ 𝐾 → 𝐿 by
𝑢(𝑣) = 𝑢 and 𝑢′ (𝑣) = 𝑢′ . Then 𝑢 ∼ 𝑢′ .
Problem 10.22. Prove Proposition 10.21.
By Proposition 10.21, the choice of vertex is irrelevant, so we simply
write ∗ for the map sending each vertex to a fixed vertex.
Definition 10.23. Let 𝑓 ∶ 𝐾 → 𝐿 be a simplicial map. Then 𝑓 is a strong
homotopy equivalence if there exists 𝑔 ∶ 𝐿 → 𝐾 such that 𝑔 ∘ 𝑓 ∼ id𝐾
and 𝑓 ∘ 𝑔 ∼ id𝐿 . If 𝑓 ∶ 𝐾 → 𝐿 is a strong homotopy equivalence, we write
𝐾 ≈ 𝐿.
Problem 10.24. Prove that 𝐾 ≈ ∗ if and only if id𝐾 is null-homotopic.
What is the relationship between strong homotopy equivalence and
having the same strong homotopy type? As you may have guessed, these
are two sides of the same coin. First, a lemma.
Lemma 10.25. If 𝑓 ∶ 𝐾 → 𝐿 is a strong homotopy equivalence be-
tween minimal simplicial complexes, then there exists a simplicial map
𝑔 ∶ 𝐿 → 𝐾 such that 𝑔 ∘ 𝑓 = id𝐾 and 𝑓 ∘ 𝑔 = id𝐿 .
Problem 10.26. Prove Lemma 10.25. [Hint: Use Proposition 10.19.]
Any 𝑓 ∶ 𝐾 → 𝐿 that satisfies the conditions found in the conclusion
of Lemma 10.25 is called a simplicial complex isomorphism. If 𝑓 is a
simplicial complex isomorphism, we say that 𝐾 and 𝐿 are isomorphic.
As we know, a strong collapse may be viewed as a simplicial map. It
is not difficult to imagine that such a map is a strong homotopy equiva-
lence. If so, we may view a sequence of strong collapses as a sequence of
strong homotopy equivalences, which itself is a strong homotopy equiv-
alence.
Proposition 10.27. Let 𝐾 be a simplicial complex and 𝑣′ a vertex domi-
nated by 𝑣. Then the inclusion map 𝑖 ∶ 𝐾−{𝑣′ } → 𝐾 is a strong homotopy
equivalence. In particular, if 𝐾 and 𝐿 have the same strong homotopy
type, then 𝐾 ≈ 𝐿.
Proof. Define 𝑟 ∶ 𝐾 → 𝐾−{𝑣′ } to be the retraction from Proposition 10.9.
If 𝜎 ∈ 𝐾 is a simplex without 𝑣′ , then clearly 𝑖𝑟(𝜎) = id𝐾 (𝜎). Otherwise,
let 𝜎 be a simplex with 𝑣′ ∈ 𝜎, and let 𝜏 be any maximal simplex con-
taining 𝜎. Then 𝑖𝑟(𝜎) ∪ id𝐾 (𝜎) = 𝜎 ∪ {𝑣} ⊆ 𝜏. Hence 𝑖𝑟(𝜎) ∪ id𝐾 (𝜎) is a
simplex of 𝐾. Thus 𝑖𝑟 ∼ id𝐾 . Clearly 𝑟𝑖 = id𝐾−{𝑣′ } so that both 𝑖 and 𝑟
are strong homotopy equivalences and hence 𝐾 ≈ 𝐾 − {𝑣′ }. Inductively,
𝐾 ≈ 𝐿 whenever 𝐾 and 𝐿 have the same strong homotopy type. □
Definition 10.28. Let 𝐾 be a simplicial complex. The core of 𝐾 is the
minimal subcomplex 𝐾0 ⊆ 𝐾 such that 𝐾 ↘↘ 𝐾0 .
Remark 10.29. The notation 𝐾0 for the core of 𝐾 should not be confused
with the set of all 0-simplices of 𝐾, which is also denoted by 𝐾0 .
The use of the article “the” above is justified by the following:
Proposition 10.30. Let 𝐾 be a simplicial complex. Then the core of 𝐾
is unique up to isomorphism.
Proof. Let 𝐾1 and 𝐾2 be two cores of 𝐾. Both are obtained from 𝐾 by
removing dominated vertices, so 𝐾1 ≈ 𝐾2 by Proposition 10.27. Fur-
thermore, because these cores are minimal, Lemma 10.25 implies that
𝐾1 and 𝐾2 are isomorphic. □
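Proposition 10.30 justifies a greedy algorithm for the core: repeatedly remove any dominated vertex; whatever order is chosen, the minimal complex that remains is the same up to isomorphism. A sketch, with a complex again encoded (hypothetically) by its set of facets:

```python
def remove_vertex(facets, vp):
    """Facets of K - {vp}, the largest subcomplex avoiding vp."""
    stripped = {f - {vp} for f in facets} - {frozenset()}
    return {f for f in stripped if not any(f < g for g in stripped)}

def core(facets):
    """Strongly collapse until no vertex is dominated."""
    facets = {frozenset(f) for f in facets}
    while True:
        verts = set().union(*facets)
        dominated = next(
            (vp for vp in verts for v in verts - {vp}
             if all(v in f for f in facets if vp in f)),
            None)
        if dominated is None:
            return facets
        facets = remove_vertex(facets, dominated)
```

The full triangle strongly collapses to a single vertex, while the hollow triangle (a 3-cycle) has no dominated vertex and is its own core; by Corollary 10.32 below, the first is strongly collapsible and the second is not.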
We may now summarize all of this work in the following theorem:
Theorem 10.31. [23] Let 𝐾 and 𝐿 be simplicial complexes. Then 𝐾 and
𝐿 have the same strong homotopy type if and only if there exists a strong
homotopy equivalence 𝑓 ∶ 𝐾 → 𝐿.
Proof. The forward direction is Proposition 10.27. For the backward
direction, assume 𝐾 ≈ 𝐿. Since 𝐾 ≈ 𝐾0 and 𝐿 ≈ 𝐿0 , 𝐾0 ≈ 𝐿0 . As before,
these strongly homotopic minimal cores are then isomorphic. Since they
are isomorphic, they clearly have the same strong homotopy type. Hence
𝐾 and 𝐿 also have the same strong homotopy type. □
As a special case of Theorem 10.31, we are able to address the point
taken up in Remark 10.14. Unlike the case of standard collapses, we will
never “get stuck” in a strongly collapsible complex.
Corollary 10.32. Let 𝐾 be a simplicial complex. Then 𝐾 is strongly col-
lapsible if and only if every sequence of strong collapses of 𝐾 collapses
to a single point. In other words, a simplicial complex 𝐾 has the strong
homotopy type of a point if and only if 𝐾 is strongly collapsible.
Remark 10.33. The beautiful thing about Theorem 10.31 is that, like
many other results in mathematics, it bridges two different viewpoints,
showing that they are in essence the same. Strong homotopy type is a
combinatorial definition where one checks a combinatorial condition.
Strong homotopy equivalence is more along the lines of a topological
definition, one involving interactions between several simplicial maps.
Yet Theorem 10.31 tells us that they are two sides of the same coin, one
from a combinatorial viewpoint, the other from a topological viewpoint.
This surprising juxtaposition of Theorem 10.31 is simply enchanting!
Problem 10.34. In Example 1.47, we showed that all cycles 𝐶𝑛 have the
same simple homotopy type. Do they have the same strong homotopy
type?
10.2. Strong discrete Morse theory
As is now apparent, the idea behind discrete Morse theory is extremely
simple: every simplicial complex can be broken down (or, equivalently,
built up) using only two moves: 1) perform an elementary collapse, and
2) remove a facet. This idea has led to many insights and relationships.
Given our work in Section 10.1, we now have a third move—namely,
a strong elementary collapse. Although this is a sequence of elemen-
tary standard collapses, a strong collapse, if found, does digest a com-
plex quite quickly. Hence, it is worth investigating what happens when
we add this “third move” into discrete Morse theory. This was first ac-
complished by Fernández-Ternero et al. in [62]. We present a slightly
different framework than the one utilized in the cited paper. We begin
by investigating the Hasse diagram, similar to our approach in Section
2.2.3.
Definition 10.35. Let 𝐾 be a simplicial complex and ℋ𝐾 the Hasse di-
agram of 𝐾. For any 1-simplex 𝑢𝑣 = 𝑒 ∈ ℋ𝐾 , define 𝐹𝑒 ∶= {𝜎 ∈ ℋ𝐾 ∶
𝑒 ≤ 𝜎}. Let 𝑆 ⊆ 𝐹𝑒 be any subset. The (𝑣, 𝑒)-strong vector generated
by 𝑆, denoted by 𝐵 𝑆 (𝑣, 𝑒), is defined by
𝐵 𝑆 (𝑣, 𝑒) ∶= {𝜎 ∈ ℋ𝐾 ∶ 𝑣 ≤ 𝜎 ≤ 𝜏 for some 𝜏 ∈ 𝑆}.
We sometimes write 𝐵(𝑣, 𝑒) when 𝑆 is clear from the context, and we
sometimes refer to it as a strong vector.
Exercise 10.36. Show that if 𝑆 ≠ ∅, then 𝑣, 𝑒 ∈ 𝐵 𝑆 (𝑣, 𝑒).
Any strong vector satisfying 𝐵 𝑆 (𝑣, 𝑒) = {𝑣, 𝑒} is called a Forman
vector. A pair 𝜎(𝑝) < 𝜏(𝑝+1) for which 𝑝 > 0 (i.e., not a Forman vector)
is a critical pair.
Example 10.37. Let 𝐾 be the simplicial complex
[Figure: a simplicial complex 𝐾 on vertices 𝑣1 , … , 𝑣7 .]
To save space, we write a simplex in the Hasse diagram ℋ𝐾 by concate-
nating its subscripts, e.g. 𝑖𝑗 stands for the simplex 𝑣𝑖 𝑣𝑗 .
[Hasse diagram ℋ𝐾 : 2-simplices 123, 124, 235, 256, 246, 467; 1-simplices 12, 13, 14, 23, 24, 25, 26, 35, 46, 47, 56, 67; 0-simplices 1, 2, 3, 4, 5, 6, 7.]
Let 𝑒 = 𝑣1 𝑣2 . Then 𝐹𝑒 = {𝑣1 𝑣2 , 𝑣1 𝑣2 𝑣3 , 𝑣1 𝑣2 𝑣4 }. Pick 𝑣 = 𝑣1 , and
define 𝑆1 ∶= {𝑣1 𝑣2 }, 𝑆2 ∶= {𝑣1 𝑣2 𝑣3 }, 𝑆3 ∶= {𝑣1 𝑣2 𝑣4 }, and 𝑆4 ∶=
{𝑣1 𝑣2 𝑣3 , 𝑣1 𝑣2 𝑣4 }. We then generate all possible strong vectors on 𝐹𝑒 ,
namely,
𝐵 𝑆1 (𝑣, 𝑒) = {𝑣1 , 𝑣1 𝑣2 },
𝐵 𝑆2 (𝑣, 𝑒) = {𝑣1 , 𝑣1 𝑣2 , 𝑣1 𝑣3 , 𝑣1 𝑣2 𝑣3 },
𝐵 𝑆3 (𝑣, 𝑒) = {𝑣1 , 𝑣1 𝑣2 , 𝑣1 𝑣4 , 𝑣1 𝑣2 𝑣4 },
𝐵 𝑆4 (𝑣, 𝑒) = {𝑣1 , 𝑣1 𝑣2 , 𝑣1 𝑣3 , 𝑣1 𝑣4 , 𝑣1 𝑣2 𝑣3 , 𝑣1 𝑣2 𝑣4 }.
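Strong vectors are simple enough to generate by machine. A sketch (hypothetical encoding as before; the edge 𝑒 enters only through the requirement 𝑆 ⊆ 𝐹𝑒 , so it is not an explicit argument):

```python
from itertools import combinations

def faces(facet):
    """All nonempty faces of a facet."""
    return {frozenset(c) for r in range(1, len(facet) + 1)
            for c in combinations(sorted(facet), r)}

def strong_vector(simplices, v, S):
    """B^S(v, e): all simplices sigma with v <= sigma <= tau
    for some tau in S."""
    return {s for s in simplices
            if v in s and any(s <= frozenset(t) for t in S)}

# The complex K of Example 10.37, generated from its facets.
facets = [{1, 2, 3}, {1, 2, 4}, {2, 3, 5}, {2, 5, 6}, {2, 4, 6}, {4, 6, 7}]
K = set().union(*(faces(f) for f in facets))
```

With 𝑆2 = {𝑣1 𝑣2 𝑣3 }, this reproduces 𝐵 𝑆2 (𝑣, 𝑒) = {𝑣1 , 𝑣1 𝑣2 , 𝑣1 𝑣3 , 𝑣1 𝑣2 𝑣3 }.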
We will show in Proposition 10.48 that a strong collapse is precisely
a strong vector. First, we give the definition of a strong discrete Morse
matching.
Definition 10.38. A strong discrete Morse matching ℳ is a partition
of ℋ𝐾 into strong vectors, critical pairs, and singletons. A singleton is
called a critical simplex. The index of a critical pair {𝛼(𝑝) , 𝛽 (𝑝+1) } is 𝑝+
1. We refer to the set of all critical pairs and critical simplices as critical
objects. The number of critical simplices of dimension 𝑖 is denoted by
𝑚𝑖 , while the number of critical pairs of index 𝑖 is denoted by 𝑝𝑖 . The set
of all critical objects of ℳ is denoted by scrit(ℳ).
Example 10.39. We give two examples of strong Morse matchings on
𝐾 from Example 10.37. Consider the sets 𝐹𝑣1 𝑣2 , 𝐹𝑣2 𝑣5 , 𝐹𝑣4 𝑣6 , 𝐹𝑣2 𝑣3 , 𝐹𝑣6 𝑣7 ,
and 𝐹𝑣2 𝑣6 . Then there exist subsets that generate the following strong
vectors:
𝐵(𝑣1 , 𝑣1 𝑣2 ) = {𝑣1 , 𝑣1 𝑣2 , 𝑣1 𝑣3 , 𝑣1 𝑣4 , 𝑣1 𝑣2 𝑣3 , 𝑣1 𝑣2 𝑣4 },
𝐵(𝑣5 , 𝑣2 𝑣5 ) = {𝑣5 , 𝑣2 𝑣5 , 𝑣5 𝑣6 , 𝑣3 𝑣5 , 𝑣2 𝑣5 𝑣6 , 𝑣2 𝑣3 𝑣5 },
𝐵(𝑣4 , 𝑣4 𝑣6 ) = {𝑣4 , 𝑣4 𝑣6 , 𝑣4 𝑣7 , 𝑣2 𝑣4 , 𝑣4 𝑣6 𝑣7 , 𝑣2 𝑣4 𝑣6 },
𝐵(𝑣3 , 𝑣2 𝑣3 ) = {𝑣3 , 𝑣2 𝑣3 },
𝐵(𝑣7 , 𝑣6 𝑣7 ) = {𝑣7 , 𝑣6 𝑣7 },
𝐵(𝑣2 , 𝑣2 𝑣6 ) = {𝑣2 , 𝑣2 𝑣6 }.
These strong vectors, along with the singleton {𝑣6 }, form a strong Morse
matching ℳ1 . The only critical object is the critical simplex 𝑣6 (the last
three strong vectors are Forman vectors).
We define another strong Morse matching ℳ2 . Using the same 𝐹 as
above, there exist subsets that generate the following strong vectors:
𝐵(𝑣1 , 𝑣1 𝑣2 ) = {𝑣1 , 𝑣1 𝑣2 , 𝑣1 𝑣3 , 𝑣1 𝑣2 𝑣3 },
𝐵(𝑣5 , 𝑣2 𝑣5 ) = {𝑣5 , 𝑣2 𝑣5 , 𝑣5 𝑣6 , 𝑣2 𝑣5 𝑣6 },
𝐵(𝑣4 , 𝑣4 𝑣6 ) = {𝑣4 , 𝑣4 𝑣6 , 𝑣4 𝑣7 , 𝑣2 𝑣4 , 𝑣4 𝑣6 𝑣7 , 𝑣2 𝑣4 𝑣6 },
𝐵(𝑣7 , 𝑣6 𝑣7 ) = {𝑣7 , 𝑣6 𝑣7 },
𝐵(𝑣2 , 𝑣2 𝑣6 ) = {𝑣2 , 𝑣2 𝑣6 }.
Critical pairs {𝑣1 𝑣4 , 𝑣1 𝑣2 𝑣4 } and {𝑣3 𝑣5 , 𝑣2 𝑣3 𝑣5 } are then combined with
these strong vectors, along with the singletons {𝑣3 }, {𝑣2 𝑣3 }, and {𝑣6 }, to
yield the strong Morse matching ℳ2 . Here we have five critical objects,
consisting of two critical pairs and three critical simplices.
Definition 10.40. A strong discrete Morse function 𝑓 on 𝐾 with re-
spect to a strong Morse matching ℳ is a function 𝑓 ∶ 𝐾 → ℝ satisfy-
ing 𝑓(𝛼) ≤ 𝑓(𝛽) whenever 𝛼 < 𝛽, with 𝑓(𝛼) = 𝑓(𝛽) if and only if there is
a set 𝐼 in the partition ℳ such that 𝛼, 𝛽 ∈ 𝐼. We set scrit(𝑓) ∶= scrit(ℳ).
Note that one obtains a flat discrete Morse function in the sense of
Forman by taking a partition consisting of only Forman vectors, critical
pairs, and singletons.
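Definition 10.40 can also be verified mechanically. The sketch below interprets the equality condition globally: two simplices carry equal values exactly when they lie in the same set of the partition (encoding hypothetical, as before):

```python
def is_strong_dmf(f, simplices, partition):
    """Check Definition 10.40: f weakly increases along face relations,
    and f takes equal values exactly within blocks of the partition."""
    block = {frozenset(s): i
             for i, blk in enumerate(partition) for s in blk}
    sims = [frozenset(s) for s in simplices]
    for a in sims:
        for b in sims:
            if a < b and f[a] > f[b]:      # must be monotone along faces
                return False
            if (f[a] == f[b]) != (block[a] == block[b]):
                return False
    return True

# A single edge ab with the Forman vector {b, ab} and singleton {a}.
sims = [{'a'}, {'b'}, {'a', 'b'}]
part = [[{'b'}, {'a', 'b'}], [{'a'}]]
good = {frozenset({'a'}): 0, frozenset({'b'}): 1, frozenset({'a', 'b'}): 1}
bad = {frozenset({'a'}): 0, frozenset({'b'}): 2, frozenset({'a', 'b'}): 1}
```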
Example 10.41. We now put a strong discrete Morse function on the
simplicial complex 𝐾 using the strong Morse matchings ℳ1 and ℳ2
from Example 10.39. A strong discrete Morse function with respect to
ℳ1 is given by
[Figure: the Hasse diagram of 𝐾 with each simplex labeled by its value under a strong discrete Morse function realizing ℳ1 .]
while a strong discrete Morse function with respect to ℳ2 is given
by
[Figure: the Hasse diagram of 𝐾 with each simplex labeled by its value under a strong discrete Morse function realizing ℳ2 .]
Exercise 10.42. Verify that the following labeling is a strong discrete
Morse function on the Argentinian complex 𝐴. Identify the critical sim-
plices and critical pairs.
[Figure: a labeling of the simplices of the Argentinian complex 𝐴.]
We may also combine strong discrete Morse functions to obtain new
ones.
Problem 10.43. Let 𝑓1 ∶ 𝐾1 → ℝ and 𝑓2 ∶ 𝐾2 → ℝ be strong discrete
Morse functions with strong Morse matchings ℳ1 and ℳ2 , respectively.
Prove that (𝑓1 +𝑓2 ) ∶ 𝐾1 ∩𝐾2 → ℝ is also a strong discrete Morse function
with strong Morse matching given by ℳ ∶= {𝐼 ∩ 𝐽 ∶ 𝐼 ∈ ℳ1 , 𝐽 ∈ ℳ2 , 𝐼 ∩
𝐽 ≠ ∅}. You may need to adjust 𝑓1 + 𝑓2 by adding a small amount 𝜖
to some values to ensure that simplices in different sets of the partition
have different values.
Problem 10.44. Find the strong discrete Morse function obtained by
adding together the two strong discrete Morse functions in Example 10.41.
What is the corresponding strong Morse matching?
Strong discrete Morse functions are inspired by and similar to gen-
eralized discrete Morse functions.
Problem 10.45. Give an example to show that a strong vector need not
be an interval (Definition 2.75) and an interval need not be a strong vec-
tor.
Problem 10.46. Let 𝐾 be a simplicial complex, and let 𝑣 ∈ 𝐾 with 𝑣 < 𝜎.
If 𝑆 ∶= {𝜎}, show that [𝑣, 𝜎] = 𝐵 𝑆 (𝑣, 𝑒) for all edges 𝑣 < 𝑒 < 𝜎.
We now prove a collapsing theorem for strong discrete Morse func-
tions.
Theorem 10.47. Let 𝑓 ∶ 𝐾 → ℝ be a strong discrete Morse function,
and suppose that (𝑎, 𝑏], 𝑎 < 𝑏, is a real interval containing no critical
values. Then 𝐾(𝑏) ↘↘ 𝐾(𝑎).
Proof. Let ℳ be the strong Morse matching for 𝑓. If (𝑎, 𝑏] contains no
regular values, then we are done. By partitioning (𝑎, 𝑏] into subintervals,
we may assume without loss of generality that (𝑎, 𝑏] contains exactly one
regular value 𝑐 ∈ (𝑎, 𝑏]. Then there is a set 𝐼 in ℳ such that 𝑐 = 𝑓(𝛼)
for every 𝛼 ∈ 𝐼. Furthermore, since 𝑐 is a regular value by supposition,
𝐼 = 𝐵(𝑣, 𝑒) for some 𝑣 and 𝑒 = 𝑣𝑢. We claim that 𝑣 is dominated by
𝑢 in 𝐾(𝑏). If so, we may perform a strong collapse by removing 𝑣 (and
all simplices containing it) from 𝐾(𝑏), yielding 𝐾(𝑎). To see that 𝑣 is
dominated by 𝑢 in 𝐾(𝑏), let 𝜎 be any facet of 𝑣 in 𝐾(𝑏). Since 𝑣 < 𝜎,
𝑓(𝑣) ≤ 𝑓(𝜎), and since there is only one regular value in (𝑎, 𝑏], it must
be that 𝑓(𝑣) = 𝑓(𝜎) = 𝑐. Hence 𝜎 ∈ 𝐵(𝑣, 𝑒) so that 𝑣 ≤ 𝜎 ≤ 𝜏 for some
𝜏 satisfying 𝑒 < 𝜏. As above, we must have 𝑓(𝜏) = 𝑐, so 𝜏 ∈ 𝐾(𝑏). By
supposition, 𝜎 is a facet in 𝐾(𝑏), so 𝜎 = 𝜏. Since 𝑢𝑣 = 𝑒 < 𝜏, we have
𝑢 ∈ 𝜎. Thus 𝑢 dominates 𝑣 in 𝐾(𝑏), and 𝐾(𝑏) ↘↘ 𝐾(𝑎). □
We then obtain a characterization for strongly collapsible simplicial
complexes in terms of a strong matching on the Hasse diagram.
Proposition 10.48. Let 𝐾 be a simplicial complex. There is a strong
discrete Morse function 𝑓 ∶ 𝐾 → ℝ with only one critical value if and
only if 𝐾 is strongly collapsible.
Proof. The forward direction is simply Theorem 10.47. For the reverse
direction, let 𝐾 be strongly collapsible via a sequence of 𝑛 strong col-
lapses. It suffices to show that the simplices in a single strong collapse
correspond to a strong vector in the Hasse diagram. The result will then
follow by induction on the strong collapses by associating a strong vector
to each strong collapse (and the unique critical vertex with a singleton).
Since 𝐾 is strongly collapsible, there exists a vertex 𝑣𝑛′ dominated by a
vertex 𝑣𝑛 along edge 𝑒𝑛 = 𝑣𝑛 𝑣𝑛′ . Label the simplices of the strong
vector 𝐵 𝑆𝑛 (𝑣𝑛′ , 𝑒𝑛 ), where 𝑆𝑛 ∶= 𝐹𝑒𝑛 , with value 𝑛; i.e., define 𝑓(𝜎) = 𝑛 for
every 𝜎 ∈ 𝐵 𝑆𝑛 (𝑣𝑛′ , 𝑒𝑛 ). Perform the strong collapse and let 𝐾 − {𝑣𝑛′ } be
the resulting complex. Since 𝐾 − {𝑣𝑛′ } is strongly collapsible, there exists
a vertex 𝑣′𝑛−1 dominated by a vertex 𝑣𝑛−1 along edge 𝑒𝑛−1 = 𝑣𝑛−1 𝑣′𝑛−1 .
Taking 𝑆𝑛−1 = 𝐹𝑒𝑛−1 ∩ (𝐾 − {𝑣𝑛′ }), we see that 𝐵 𝑆𝑛−1 (𝑣′𝑛−1 , 𝑒𝑛−1 ) yields
another strong vector. Label these values 𝑛 − 1. Continuing in this man-
ner, we obtain a strong discrete Morse matching on ℋ𝐾 .
It remains to show that the labeling specified satisfies the properties
of a strong discrete Morse function. Clearly two simplices are given the
same label if and only if they are in the same set of the partition. Let
𝛼 < 𝛽 and suppose by contradiction that 𝑘 = 𝑓(𝛼) > 𝑓(𝛽) = ℓ. Then,
given the order specified by the strong collapses, 𝛼 was part of a strong
collapse, other strong collapses were performed, and then 𝛽 was part of
a strong collapse. But this is impossible since 𝛼 < 𝛽 and results of strong
collapses need to remain simplicial complexes. □
10.3. Simplicial Lusternik-Schnirelmann category
By now, you have discerned a principled theme of this book: once we
have a notion of what it means for two objects to be the “same,” we de-
sire ways to tell them apart. In Chapter 1, we introduced simple homo-
topy type and have spent much effort developing tools such as the Eu-
ler characteristic and Betti numbers, allowing us to tell simplicial com-
plexes apart up to simple homotopy. For example, Proposition 1.42 tells
us that if 𝐾 and 𝐿 have the same simple homotopy type, then they have
the same Euler characteristic. The contrapositive of this is most useful
for telling simplicial complexes apart; that is, if 𝜒(𝐾) ≠ 𝜒(𝐿), then 𝐾 and
𝐿 do not have the same simple homotopy type. This section is devoted
to developing not only a strong homotopy invariant (Corollary 10.65),
but one that has a nice relationship with the number of critical objects
of a strong discrete Morse function (Theorem 10.70). We first define our
object of study.
Definition 10.49. Let 𝐾 be a simplicial complex. We say that a
collection of subcomplexes {𝑈0 , 𝑈1 , … , 𝑈𝑛 } is a covering or covers 𝐾 if
𝑈0 ∪ 𝑈1 ∪ ⋯ ∪ 𝑈𝑛 = 𝐾.
Definition 10.50. Let 𝑓 ∶ 𝐾 → 𝐿 be a simplicial map. The simplicial
Lusternik-Schnirelmann category, simplicial LS category, or sim-
plicial category of 𝑓, denoted by scat(𝑓), is the least integer 𝑛 such that
𝐾 can be covered by subcomplexes 𝑈0 , 𝑈1 , … , 𝑈𝑛 with 𝑓|𝑈𝑗 ∼ ∗ for all
0 ≤ 𝑗 ≤ 𝑛. We call 𝑈0 , … , 𝑈𝑛 a categorical cover of 𝑓.
Remark 10.51. The original Lusternik-Schnirelmann category of a
topological space was defined in 1934 by the Soviet mathematicians L.
Lusternik and L. Schnirelmann [114]. It has generated a great deal of
research, several variations, and many related invariants. See, for ex-
ample, the book-length treatment [47]. However, it was not until 2015
that Fernández-Ternero et al. defined a simplicial version of the LS cat-
egory (Definition 10.52), one which was defined purely for simplicial
complexes [61, 63]. Our definition of the simplicial category of a map,
first studied in [140], is a slight generalization of the original definition,
as we will show in Proposition 10.53. Other versions of the Lusternik-
Schnirelmann category for simplicial complexes are studied in [2, 79]
and [146, 147].
Definition 10.52. A subcomplex 𝑈 ⊆ 𝐾 is called categorical in 𝐾 if
𝑖𝑈 ∶ 𝑈 → 𝐾 is null-homotopic, where 𝑖𝑈 is the inclusion. The simpli-
cial category of 𝐾, denoted by scat(𝐾), is the least integer 𝑛 such that
𝐾 can be covered by 𝑛 + 1 categorical sets in 𝐾.
We now show that scat(𝐾) is just a special case of Definition 10.50.
Proposition 10.53. Let 𝐾 be a simplicial complex. Then scat(𝐾) =
scat(id𝐾 ).
Proof. Assume that scat(id𝐾 ) = 𝑛. Then 𝐾 can be covered by subcom-
plexes 𝑈0 , 𝑈1 , … , 𝑈𝑛 with id𝐾 |𝑈𝑗 ∼ ∗. Since id𝐾 |𝑈𝑗 = 𝑖𝑈𝑗 and id𝐾 |𝑈𝑗 ∼ ∗,
𝑖𝑈𝑗 ∼ ∗ so that 𝑈0 , 𝑈1 , … , 𝑈𝑛 forms a categorical cover of 𝐾. The same
reasoning shows that a categorical cover of 𝐾 yields a categorical cover
of id𝐾 . Thus scat(id𝐾 ) = scat(𝐾). □
We compute a few examples.
Example 10.54. Let 𝐴 be the Argentinian complex from Example 10.13.
We will cover 𝐴 with two categorical sets. These will be given by {𝜏,̄ 𝐴 −
{𝜎, 𝜏}} where 𝜎 and 𝜏 are the free pair below and 𝜏 ̄ is the subcomplex of
𝐴 generated by 𝜏 (Definition 1.4).
[Figure: the Argentinian complex 𝐴 with the free pair {𝜎, 𝜏} marked, where 𝜏 is the 2-simplex 𝑎𝑏𝑣′ .]
We will explicitly show that the inclusion 𝑖 ∶ 𝜏 ̄ → 𝐴 is null-homotopic.
We claim that 𝑖 ∼ ∗ where ∗ ∶ 𝜏 ̄ → 𝐴 is given by sending every vertex of
𝜏 ̄ to 𝑣′ . We need to check that if 𝛼 ∈ 𝜏 ̄ is a simplex, then 𝑖(𝛼) ∪ ∗(𝛼) is a
simplex in 𝐴. We have
𝑖(𝑎) ∪ ∗(𝑎) = 𝑎𝑣′ ,
𝑖(𝑏) ∪ ∗(𝑏) = 𝑏𝑣′ ,
𝑖(𝑣′ ) ∪ ∗(𝑣′ ) = 𝑣′ ,
𝑖(𝑎𝑏) ∪ ∗(𝑎𝑏) = 𝑎𝑏𝑣′ ,
𝑖(𝑎𝑣′ ) ∪ ∗(𝑎𝑣′ ) = 𝑎𝑣′ ,
𝑖(𝑏𝑣′ ) ∪ ∗(𝑏𝑣′ ) = 𝑏𝑣′ ,
𝑖(𝑎𝑏𝑣′ ) ∪ ∗(𝑎𝑏𝑣′ ) = 𝑎𝑏𝑣′ .
In each of the seven cases, we obtain a simplex of 𝐴. Hence
𝑖 ∼𝑐 ∗ so that 𝑖 ∼ ∗. Another straightforward but more tedious calcu-
lation shows that the inclusion of 𝐴 − {𝜎, 𝜏} into 𝐴 is null-homotopic.
Hence scat(𝐴) ≤ 1.
That scat(𝐴) > 0 will follow from Corollary 10.67 below.
Example 10.55. An interesting example is given by a 1-dimensional
simplicial complex on 𝑛 vertices with an edge between every pair of ver-
tices, i.e., the complete graph on 𝑛 vertices, denoted by 𝐾𝑛 . A sub-
complex 𝑈 of 𝐾𝑛 is categorical if and only if 𝑈 is a forest (Problem 10.56).
Hence if 𝑈 is categorical in 𝐾𝑛 , the maximum number of edges that 𝑈
can have is 𝑛 − 1. This follows from the fact that 𝑣 − 𝑒 = 𝑏0 − 𝑏1 by
Theorem 3.23. Since 𝐾𝑛 has a total of 𝑛(𝑛 − 1)/2 edges, 𝐾𝑛 needs at
least ⌈𝑛/2⌉ categorical sets in a cover.
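The counting argument is pure arithmetic: each categorical set covers at most 𝑛 − 1 of the 𝑛(𝑛 − 1)/2 edges, so dividing gives the lower bound ⌈𝑛/2⌉. A one-line sketch (the function name is ours):

```python
import math

def kn_categorical_lower_bound(n):
    """Least number of categorical sets needed to cover K_n (n >= 2):
    n(n-1)/2 edges, at most n-1 of them per categorical set."""
    return math.ceil((n * (n - 1) // 2) / (n - 1))
```

For instance, 𝐾5 needs at least 3 categorical sets, so scat(𝐾5 ) ≥ 2.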
Problem 10.56. Prove that if a subgraph 𝑈 of a connected graph 𝐺 is a
forest, then 𝑈 is categorical in 𝐺.
The converse of Problem 10.56 is proved in [61].
Problem 10.57. Let 𝐾 be a simplicial complex and Σ𝐾 the suspension
of 𝐾. Prove that scat(Σ𝐾) ≤ 1.
There are several simple yet key properties of scat.
Proposition 10.58. If 𝑓 ∶ 𝐾 → 𝐿 and 𝑔 ∶ 𝐿 → 𝑀, then
scat(𝑔 ∘ 𝑓) ≤ min{scat(𝑔), scat(𝑓)}.
Proof. Suppose that 𝑓 ∶ 𝐾 → 𝐿 and 𝑔 ∶ 𝐿 → 𝑀. We will show that
scat(𝑔 ∘ 𝑓) is less than or equal to both scat(𝑔) and scat(𝑓). The result
then follows. We first show that scat(𝑔 ∘ 𝑓) ≤ scat(𝑓). Write scat(𝑓) = 𝑛
so that there exist 𝑈0 , 𝑈1 , … , 𝑈𝑛 ⊆ 𝐾 covering 𝐾 with 𝑓|𝑈𝑗 ∼ ∗. We
claim that (𝑔 ∘ 𝑓)|𝑈𝑗 ∼ ∗. Observe that (𝑔 ∘ 𝑓)|𝑈𝑗 = 𝑔 ∘ (𝑓|𝑈𝑗 ) ∼ 𝑔 ∘ ∗ ∼ ∗.
Thus, scat(𝑔 ∘ 𝑓) ≤ scat(𝑓) = 𝑛.
Now write scat(𝑔) = 𝑚. Then there exist 𝑉0 , 𝑉1 , … , 𝑉𝑚 ⊆ 𝐿 such that
𝑔|𝑉𝑗 ∼ ∗. Define 𝑈𝑗 ∶= 𝑓−1 (𝑉𝑗 ) for all 0 ≤ 𝑗 ≤ 𝑚. Then each 𝑈𝑗 is a
subcomplex of 𝐾, and together the 𝑈𝑗 cover 𝐾. The triangle formed by
(𝑔 ∘ 𝑓)|𝑈𝑗 ∶ 𝑈𝑗 → 𝑀, 𝑓|𝑈𝑗 ∶ 𝑈𝑗 → 𝑉𝑗 , and 𝑔|𝑉𝑗 ∶ 𝑉𝑗 → 𝑀
commutes up to strong homotopy; that is, 𝑔|𝑉𝑗 ∘ 𝑓|𝑈𝑗 ∼ (𝑔 ∘ 𝑓)|𝑈𝑗 . Since
𝑔|𝑉𝑗 ∼ ∗, we have (𝑔 ∘ 𝑓)|𝑈𝑗 ∼ ∗. □
We now show that simplicial category is well-defined up to strong
homotopy. First we give a lemma.
Lemma 10.59. Let 𝑓, 𝑔 ∶ 𝐾 → 𝐿 and let 𝑈 ⊆ 𝐾 be a subcomplex. If
𝑔|𝑈 ∼ ∗ and 𝑓 ∼ 𝑔, then 𝑓|𝑈 ∼ ∗.
Proof. We may assume without loss of generality that 𝑓 ∼𝑐 𝑔, as the
general result follows by induction. By definition of 𝑓 ∼𝑐 𝑔, 𝑓(𝜎) ∪ 𝑔(𝜎)
is a simplex in 𝐿, so 𝑓|𝑈 (𝜎) ∪ 𝑔|𝑈 (𝜎) is also a simplex in 𝐿. Thus
𝑓|𝑈 ∼𝑐 𝑔|𝑈 ∼ ∗ and the result follows. □
Proposition 10.60. Let 𝑓, 𝑔 ∶ 𝐾 → 𝐿. If 𝑓 ∼ 𝑔, then scat(𝑓) = scat(𝑔).
Problem 10.61. Prove Proposition 10.60.
A simple consequence of Propositions 10.58 and 10.60 is the follow-
ing lemma:
Lemma 10.62. Suppose there are simplicial maps 𝐾 → 𝐽 and 𝑀 → 𝐿
making the square

        𝑓
    𝐾 ———→ 𝐿
    ↓        ↑
    𝐽 ———→ 𝑀
        𝑔

commute up to strong homotopy. Then scat(𝑓) ≤ scat(𝑔).
It then follows that the simplicial category of a map bounds from
below the simplicial category of either simplicial complex.
Proposition 10.63. Let 𝑓 ∶ 𝐾 → 𝐿. Then
scat(𝑓) ≤ min{scat(𝐾), scat(𝐿)}.
Proof. Apply Lemma 10.62 twice: first to the square with 𝑓 ∶ 𝐾 → 𝐿
across the top, id𝐾 ∶ 𝐾 → 𝐾 across the bottom, id𝐾 on the left, and 𝑓
on the right; then to the square with 𝑓 across the top, id𝐿 ∶ 𝐿 → 𝐿
across the bottom, 𝑓 on the left, and id𝐿 on the right. Both commute
up to strong homotopy, giving scat(𝑓) ≤ scat(𝐾) and scat(𝑓) ≤ scat(𝐿).
□
Exercise 10.64. Give an example to show that the inequality in Propo-
sition 10.63 can be strict.
It now follows that scat remains the same under strong homotopy
type.
Corollary 10.65. If 𝑓 ∶ 𝐾 → 𝐿 is a strong homotopy equivalence, then
scat(𝑓) = scat(𝐾) = scat(𝐿).
Problem 10.66. Prove Corollary 10.65.
An immediate corollary of Corollary 10.65 is the following:
Corollary 10.67. Let 𝐾 be a simplicial complex. Then scat(𝐾) = 0 if
and only if 𝐾 is strongly collapsible.
Hence, scat quantifies how far a simplicial complex is from being
strongly collapsible.
10.3.1. Simplicial LS theorem. This short and final subsection is de-
voted to proving the simplicial Lusternik-Schnirelmann theorem, a the-
orem relating scat(𝐾) to the number of critical objects of any strong dis-
crete Morse function on 𝐾.
The next lemma follows by using the covers of 𝐾 and 𝐿 to cover 𝐾∪𝐿.
Lemma 10.68. Let 𝐾 and 𝐿 be two simplicial complexes. Then scat(𝐾 ∪
𝐿) ≤ scat(𝐾) + scat(𝐿) + 1.
Problem 10.69. Prove Lemma 10.68.
Theorem 10.70. Let 𝑓 ∶ 𝐾 → ℝ be a strong discrete Morse function.
Then
scat(𝐾) + 1 ≤ | scrit(𝑓)|.
Proof. For any natural number 𝑛, define
𝑐𝑛 ∶= min{𝑎 ∈ ℝ ∶ scat(𝐾(𝑎)) ≥ 𝑛 − 1}.
We claim that 𝑐𝑛 is a critical value of 𝑓. Suppose by contradiction that
𝑐𝑛 is a regular value. Since there are only finitely many values in im(𝑓),
there exists a largest value 𝑏 ∈ im(𝑓) such that 𝑏 < 𝑐𝑛 . Since there are
no critical values in (𝑏, 𝑐𝑛 ], by Theorem 10.47 we have 𝐾(𝑐𝑛 ) ↘↘ 𝐾(𝑏).
By Corollary 10.65, scat(𝐾(𝑐𝑛 )) = scat(𝐾(𝑏)), contradicting the fact that
𝑐𝑛 is minimum. Thus 𝑐𝑛 must be critical.
It remains to show that the addition of a critical object increases scat
by at most 1. That is, we need to show that if 𝜎 is a critical simplex, then
scat(𝐾(𝑎) ∪ {𝜎}) ≤ scat(𝐾(𝑎)) + 1. This follows from Lemma 10.68 since
scat(𝜎)̄ = 0 where 𝜎̄ is the smallest simplicial complex containing 𝜎. The
same argument shows the result when attaching a critical pair. □
Example 10.71. We show that the bound in Theorem 10.70 is sharp.
Let 𝐴 be the Argentinian complex along with the discrete Morse func-
tion from Exercise 10.42. This satisfies | scrit(𝑓)| = 2. Since 𝐴 has no
dominating vertex, scat(𝐴) > 0; hence scat(𝐴) + 1 = | scrit(𝑓)| = 2.
Now we will show that the inequality can be strict. Consider the
labeling of the vertices of 𝐴 below.
[Figure: the Argentinian complex 𝐴 with three of its vertices labeled 𝑎, 𝑏, and 𝑐.]
Let 𝐴′ ∶= 𝐴 ∪ {𝑎𝑏𝑐}, gluing a 2-simplex 𝑎𝑏𝑐 to 𝐴. It is easy to show that
𝑏2 (𝐴′ ) = 1, so by the discrete Morse inequalities (Theorem 4.1), it follows
that every discrete Morse function 𝑓 defined on 𝐴′ (and in particular
every strong discrete Morse function) has at least two critical simplices:
a critical vertex and a critical 2-simplex. Clearly 𝐴′ does not contain any
dominated vertex. Moreover, the removal of any critical 2-simplex does
not result in any dominating vertices, so at least one critical pair arises in
order to collapse to a subcomplex containing dominated vertices. Hence,
any strong discrete Morse function 𝑓 ∶ 𝐴′ → ℝ must have at least three
critical objects. Furthermore, we can cover 𝐴′ with two categorical sets
so that scat(𝐴′ ) = 1. Hence scat(𝐴′ ) + 1 = 2 < 3 ≤ | scrit(𝑓)|.
Problem 10.72. Find a categorical cover of 𝐴′ of size 2.
Problem 10.73. Verify that 𝑏2 (𝐴′ ) = 1.
Bibliography
[1] S. E. Aaronson, M. E. Meyer, N. A. Scoville, M. T. Smith, and L. M. Stibich, Graph isomorphisms
in discrete Morse theory, AKCE Int. J. Graphs Comb. 11 (2014), no. 2, 163–176. MR3243115
[2] S. Aaronson and N. A. Scoville, Lusternik-Schnirelmann category for simplicial complexes, Illinois
J. Math. 57 (2013), no. 3, 743–753. MR3275736
[3] K. Adiprasito and B. Benedetti, Tight complexes in 3-space admit perfect discrete Morse functions,
European J. Combin. 45 (2015), 71–84, DOI 10.1016/j.ejc.2014.10.002. MR3286622
[4] K. A. Adiprasito, B. Benedetti, and F. H. Lutz, Extremal examples of collapsible complexes
and random discrete Morse theory, Discrete Comput. Geom. 57 (2017), no. 4, 824–853, DOI
10.1007/s00454-017-9860-4. MR3639606
[5] M. Agiorgousis, B. Green, A. Onderdonk, N. A. Scoville, and K. Rich, Homological sequences in
discrete Morse theory, Topology Proc. 54 (2019), 283–294.
[6] R. Ayala, L. M. Fernández, D. Fernández-Ternero, and J. A. Vilches, Discrete Morse theory
on graphs, Topology Appl. 156 (2009), no. 18, 3091–3100, DOI 10.1016/j.topol.2009.01.022.
MR2556069
[7] R. Ayala, L. M. Fernández, A. Quintero, and J. A. Vilches, A note on the pure Morse complex
of a graph, Topology Appl. 155 (2008), no. 17-18, 2084–2089, DOI 10.1016/j.topol.2007.04.023.
MR2457993
[8] R. Ayala, L. M. Fernández, and J. A. Vilches, Critical elements of proper discrete Morse functions,
Math. Pannon. 19 (2008), no. 2, 171–185. MR2553730
[9] R. Ayala, L. M. Fernández, and J. A. Vilches, Characterizing equivalent discrete Morse func-
tions, Bull. Braz. Math. Soc. (N.S.) 40 (2009), no. 2, 225–235, DOI 10.1007/s00574-009-0010-3.
MR2511547
[10] R. Ayala, L. M. Fernández, and J. A. Vilches, Discrete Morse inequalities on infinite graphs, Elec-
tron. J. Combin. 16 (2009), no. 1, Research Paper 38, 11 pp. MR2491640
[11] R. Ayala, D. Fernández-Ternero, and J. A. Vilches, Counting excellent discrete Morse functions on
compact orientable surfaces, Image-A: Applicable Mathematics in Image Engineering 1 (2010),
no. 1, 49–56.
[12] R. Ayala, D. Fernández-Ternero, and J. A. Vilches, The number of excellent discrete Morse functions
on graphs, Discrete Appl. Math. 159 (2011), no. 16, 1676–1688, DOI 10.1016/j.dam.2010.12.011.
MR2825610
[13] R. Ayala, D. Fernández-Ternero, and J. A. Vilches, Perfect discrete Morse functions on 2-complexes,
Pattern Recognition Letters 33 (2012), no. 11, 1495–1500.
[14] R. Ayala, D. Fernández-Ternero, and J. A. Vilches, Perfect discrete Morse functions on triangulated
3-manifolds, Computational topology in image context, Lecture Notes in Comput. Sci., vol. 7309,
Springer, Heidelberg, 2012, pp. 11–19, DOI 10.1007/978-3-642-30238-1_2. MR2983410
[15] R. Ayala, J. A. Vilches, G. Jerše, and N. M. Kosta, Discrete gradient fields on infinite complexes,
Discrete Contin. Dyn. Syst. 30 (2011), no. 3, 623–639, DOI 10.3934/dcds.2011.30.623. MR2784612
[16] E. Babson and P. Hersh, Discrete Morse functions from lexicographic orders, Trans. Amer. Math.
Soc. 357 (2005), no. 2, 509–534, DOI 10.1090/S0002-9947-04-03495-6. MR2095621
[17] T. Banchoff, Critical points and curvature for embedded polyhedra, J. Differential Geometry 1
(1967), 245–256. MR0225327
[18] T. F. Banchoff, Critical points and curvature for embedded polyhedral surfaces, Amer. Math.
Monthly 77 (1970), 475–485, DOI 10.2307/2317380. MR0259812
[19] T. F. Banchoff, Critical points and curvature for embedded polyhedra. II, Differential geometry
(College Park, Md., 1981/1982), Progr. Math., vol. 32, Birkhäuser Boston, Boston, MA, 1983,
pp. 34–55. MR702526
[20] J. Bang-Jensen and G. Gutin, Digraphs. Theory, algorithms and applications, 2nd ed., Springer
Monographs in Mathematics, Springer-Verlag London, Ltd., London, 2009. MR2472389
[21] J. A. Barmak, Algebraic topology of finite topological spaces and applications, Lecture Notes in
Mathematics, vol. 2032, Springer, Heidelberg, 2011. MR3024764
[22] J. A. Barmak and E. G. Minian, Simple homotopy types and finite spaces, Adv. Math. 218 (2008),
no. 1, 87–104, DOI 10.1016/j.aim.2007.11.019. MR2409409
[23] J. A. Barmak and E. G. Minian, Strong homotopy types, nerves and collapses, Discrete Comput.
Geom. 47 (2012), no. 2, 301–328, DOI 10.1007/s00454-011-9357-5. MR2872540
[24] U. Bauer, Persistence in discrete Morse theory, Ph.D. thesis, Georg-August-Universität Göttingen,
2011.
[25] U. Bauer and H. Edelsbrunner, The Morse theory of Čech and Delaunay filtrations, Computational
geometry (SoCG’14), ACM, New York, 2014, pp. 484–490. MR3382330
[26] U. Bauer and H. Edelsbrunner, The Morse theory of Čech and Delaunay complexes, Trans. Amer.
Math. Soc. 369 (2017), no. 5, 3741–3762, DOI 10.1090/tran/6991. MR3605986
[27] B. Benedetti, Discrete Morse Theory is at least as perfect as Morse theory, arXiv e-prints (2010),
arXiv:1010.0548.
[28] B. Benedetti, Discrete Morse theory for manifolds with boundary, Trans. Amer. Math. Soc. 364
(2012), no. 12, 6631–6670, DOI 10.1090/S0002-9947-2012-05614-5. MR2958950
[29] B. Benedetti, Smoothing discrete Morse theory, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 16 (2016),
no. 2, 335–368. MR3559605
[30] B. Benedetti and F. Lutz, Library of triangulations, Available at http://page.math.tu-berlin.
de/~lutz/stellar/library_of_triangulations/.
[31] B. Benedetti and F. H. Lutz, The dunce hat and a minimal non-extendably collapsible 3-ball, Elec-
tronic Geometry Models 16 (2013), Model 2013.10.001.
[32] B. Benedetti and F. H. Lutz, Knots in collapsible and non-collapsible balls, Electron. J. Combin. 20
(2013), no. 3, Paper 31, 29 pp. MR3104529
[33] B. Benedetti and F. H. Lutz, Random discrete Morse theory and a new library of triangulations,
Exp. Math. 23 (2014), no. 1, 66–94, DOI 10.1080/10586458.2013.865281. MR3177457
[34] M. Bestvina, PL Morse theory, Math. Commun. 13 (2008), no. 2, 149–162. MR2488666
[35] M. Bestvina and N. Brady, Morse theory and finiteness properties of groups, Invent. Math. 129
(1997), no. 3, 445–470, DOI 10.1007/s002220050168. MR1465330
[36] E. D. Bloch, Polyhedral representation of discrete Morse functions, Discrete Math. 313 (2013),
no. 12, 1342–1348, DOI 10.1016/j.disc.2013.02.020. MR3061119
[37] R. Bott, Morse theory indomitable, Inst. Hautes Études Sci. Publ. Math. 68 (1988), 99–114 (1989).
MR1001450
[38] B. Brost, Computing persistent homology via discrete Morse theory, Master’s thesis, University of
Copenhagen, December 2013.
[39] N. A. Capitelli and E. G. Minian, A simplicial complex is uniquely determined by its set of discrete
Morse functions, Discrete Comput. Geom. 58 (2017), no. 1, 144–157, DOI 10.1007/s00454-017-
9865-z. MR3658332
[40] E. Chambers, E. Gasparovic, and K. Leonard, Medial fragments for segmentation of articulating
objects in images, Research in shape analysis, Assoc. Women Math. Ser., vol. 12, Springer, Cham,
2018, pp. 1–15. MR3859054
[41] M. K. Chari, On discrete Morse functions and combinatorial decompositions (English, with Eng-
lish and French summaries), Discrete Math. 217 (2000), no. 1-3, 101–113, DOI 10.1016/S0012-
365X(99)00258-7. Formal power series and algebraic combinatorics (Vienna, 1997). MR1766262
[42] M. K. Chari and M. Joswig, Complexes of discrete Morse functions, Discrete Math. 302 (2005),
no. 1-3, 39–51, DOI 10.1016/j.disc.2004.07.027. MR2179635
[43] G. Chartrand, L. Lesniak, and P. Zhang, Graphs & digraphs, 6th ed., Textbooks in Mathematics,
CRC Press, Boca Raton, FL, 2016. MR3445306
[44] Y.-M. Chung and S. Day, Topological fidelity and image thresholding: a persistent homology ap-
proach, J. Math. Imaging Vision 60 (2018), no. 7, 1167–1179, DOI 10.1007/s10851-018-0802-4.
MR3832139
[45] M. M. Cohen, A course in simple-homotopy theory, Graduate Texts in Mathematics, vol. 10,
Springer-Verlag, New York-Berlin, 1973. MR0362320
[46] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to algorithms, 3rd ed., MIT
Press, Cambridge, MA, 2009. MR2572804
[47] O. Cornea, G. Lupton, J. Oprea, and D. Tanré, Lusternik-Schnirelmann category, Mathemati-
cal Surveys and Monographs, vol. 103, American Mathematical Society, Providence, RI, 2003.
MR1990857
[48] J. Curry, R. Ghrist, and V. Nanda, Discrete Morse theory for computing cellular sheaf cohomology,
Found. Comput. Math. 16 (2016), no. 4, 875–897, DOI 10.1007/s10208-015-9266-8. MR3529128
[49] T. K. Dey, J. Wang, and Y. Wang, Graph reconstruction by discrete Morse theory, arXiv e-prints
(2018), arXiv:1803.05093.
[50] M. de Longueville, A course in topological combinatorics, Universitext, Springer, New York, 2013.
MR2976494
[51] V. de Silva and R. Ghrist, Coordinate-free coverage in sensor networks with controlled boundaries,
Intl. J. Robotics Research 25 (2006), no. 12, 1205–1222.
[52] P. Dłotko and H. Wagner, Computing homology and persistent homology using iterated Morse de-
composition, arXiv e-prints (2012), arXiv:1210.1429.
[53] P. Dłotko and H. Wagner, Simplification of complexes for persistent homology computations, Ho-
mology Homotopy Appl. 16 (2014), no. 1, 49–63, DOI 10.4310/HHA.2014.v16.n1.a3. MR3171260
[54] H. Edelsbrunner, A short course in computational geometry and topology, SpringerBriefs in Ap-
plied Sciences and Technology, Springer, Cham, 2014. MR3328629
[55] H. Edelsbrunner and J. L. Harer, Computational topology. An introduction, American Mathemat-
ical Society, Providence, RI, 2010. MR2572029
[56] H. Edelsbrunner, D. Letscher, and A. Zomorodian, Topological persistence and simplification, Dis-
crete Comput. Geom. 28 (2002), no. 4, 511–533, DOI 10.1007/s00454-002-2885-2. Discrete and
computational geometry and graph drawing (Columbia, SC, 2001). MR1949898
[57] H. Edelsbrunner, A. Nikitenko, and M. Reitzner, Expected sizes of Poisson-Delaunay mosaics
and their discrete Morse functions, Adv. in Appl. Probab. 49 (2017), no. 3, 745–767, DOI
10.1017/apr.2017.20. MR3694316
[58] S. P. Ellis and A. Klein, Describing high-order statistical dependence using “concurrence topology,”
with application to functional MRI brain data, Homology Homotopy Appl. 16 (2014), no. 1, 245–
264, DOI 10.4310/HHA.2014.v16.n1.a14. MR3211745
[59] A. Engström, Discrete Morse functions from Fourier transforms, Experiment. Math. 18 (2009),
no. 1, 45–53. MR2548985
[60] D. Farley and L. Sabalka, Discrete Morse theory and graph braid groups, Algebr. Geom. Topol. 5
(2005), 1075–1109, DOI 10.2140/agt.2005.5.1075. MR2171804
[61] D. Fernández-Ternero, E. Macías-Virgós, E. Minuz, and J. A. Vilches, Simplicial Lusternik-
Schnirelmann category, Publ. Mat. 63 (2019), no. 1, 265–293, DOI 10.5565/PUBLMAT6311909.
MR3908794
[62] D. Fernández-Ternero, E. Macías-Virgós, N. A. Scoville, and J. A. Vilches, Strong discrete Morse
theory and simplicial L-S category: A discrete version of the Lusternik-Schnirelmann Theorem, Dis-
crete Comput. Geom. (2019), DOI 10.1007/s00454-019-00116-8.
[63] D. Fernández-Ternero, E. Macías-Virgós, and J. A. Vilches, Lusternik-Schnirelmann cat-
egory of simplicial complexes and finite spaces, Topology Appl. 194 (2015), 37–50, DOI
10.1016/j.topol.2015.08.001. MR3404603
[64] D. L. Ferrario and R. A. Piccinini, Simplicial structures in topology, CMS Books in Mathemat-
ics/Ouvrages de Mathématiques de la SMC, Springer, New York, 2011. Translated from the 2009
Italian original by Maria Nair Piccinini. MR2663748
[65] R. Forman, Morse theory for cell complexes, Adv. Math. 134 (1998), no. 1, 90–145, DOI
10.1006/aima.1997.1650. MR1612391
[66] R. Forman, Combinatorial differential topology and geometry, New perspectives in algebraic com-
binatorics (Berkeley, CA, 1996), Math. Sci. Res. Inst. Publ., vol. 38, Cambridge Univ. Press, Cam-
bridge, 1999, pp. 177–206. MR1731817
[67] R. Forman, Morse theory and evasiveness, Combinatorica 20 (2000), no. 4, 489–504, DOI
10.1007/s004930070003. MR1804822
[68] R. Forman, Combinatorial Novikov-Morse theory, Internat. J. Math. 13 (2002), no. 4, 333–368, DOI
10.1142/S0129167X02001265. MR1911862
[69] R. Forman, Discrete Morse theory and the cohomology ring, Trans. Amer. Math. Soc. 354 (2002),
no. 12, 5063–5085, DOI 10.1090/S0002-9947-02-03041-6. MR1926850
[70] R. Forman, A user’s guide to discrete Morse theory, Sém. Lothar. Combin. 48 (2002), Art. B48c.
MR1939695
[71] R. Forman, Some applications of combinatorial differential topology, Graphs and patterns in math-
ematics and theoretical physics, Proc. Sympos. Pure Math., vol. 73, Amer. Math. Soc., Providence,
RI, 2005, pp. 281–313, DOI 10.1090/pspum/073/2131018. MR2131018
[72] R. Freij, Equivariant discrete Morse theory, Discrete Math. 309 (2009), no. 12, 3821–3829, DOI
10.1016/j.disc.2008.10.029. MR2537376
[73] P. Frosini, A distance for similarity classes of submanifolds of a Euclidean space, Bull. Austral.
Math. Soc. 42 (1990), no. 3, 407–416, DOI 10.1017/S0004972700028574. MR1083277
[74] U. Fugacci, F. Iuricich, and L. De Floriani, Computing discrete Morse complexes from simplicial
complexes, Graph. Models 103 (2019), 1–14, DOI 10.1016/j.gmod.2019.101023. MR3936806
[75] E. Gawrilow and M. Joswig, polymake: a framework for analyzing convex polytopes, Polytopes—
combinatorics and computation (Oberwolfach, 1997), DMV Sem., vol. 29, Birkhäuser, Basel,
2000, pp. 43–73. MR1785292
[76] R. W. Ghrist, Elementary applied topology, edition 1.0, ISBN 978-1502880857, Createspace, 2014.
[77] L. C. Glaser, Geometrical combinatorial topology. Vol. I, Van Nostrand Reinhold Mathematics
Studies, vol. 27, Van Nostrand Reinhold Co., New York, 1970. MR3309564
[78] L. C. Glaser, Geometrical combinatorial topology. Vol. II, Van Nostrand Reinhold Mathematics
Studies, vol. 28, Van Nostrand Reinhold Co., London, 1972. MR3308955
[79] B. Green, N. A. Scoville, and M. Tsuruga, Estimating the discrete geometric Lusternik-
Schnirelmann category, Topol. Methods Nonlinear Anal. 45 (2015), no. 1, 103–116, DOI
10.12775/TMNA.2015.006. MR3365007
[80] D. Günther, J. Reininghaus, H. Wagner, and I. Hotz, Efficient computation of 3D Morse–Smale
complexes and persistent homology using discrete Morse theory, The Visual Computer 28 (2012),
no. 10, 959–969.
[81] M. Hachimori, Simplicial complex library, Available at http://infoshako.sk.tsukuba.ac.
jp/~hachi/math/library/index_eng.html.
[82] S. Harker, K. Mischaikow, M. Mrozek, and V. Nanda, Discrete Morse theoretic algorithms for com-
puting homology of complexes and maps, Found. Comput. Math. 14 (2014), no. 1, 151–184, DOI
10.1007/s10208-013-9145-0. MR3160710
[83] S. Harker, K. Mischaikow, M. Mrozek, V. Nanda, H. Wagner, M. Juda, and P. Dłotko, The efficiency
of a homology algorithm based on discrete Morse theory and coreductions, Proceedings of the 3rd
International Workshop on Computational Topology in Image Context (CTIC 2010), vol. 1, 2010,
pp. 41–47.
[84] A. Hatcher, Algebraic topology, Cambridge University Press, Cambridge, 2002. MR1867354
[85] M. Henle, A combinatorial introduction to topology, Dover Publications, Inc., New York, 1994.
Corrected reprint of the 1979 original [Freeman, San Francisco, CA; MR0550879 (81g:55001)].
MR1280460
[86] G. Henselman and R. Ghrist, Matroid filtrations and computational persistent homology, arXiv
e-prints (2016), arXiv:1606.00199.
[87] M. Jöllenbeck and V. Welker, Minimal resolutions via algebraic discrete Morse theory, Mem. Amer.
Math. Soc. 197 (2009), no. 923, vi+74 pp, DOI 10.1090/memo/0923. MR2488864
[88] J. Jonsson, Simplicial complexes of graphs, Lecture Notes in Mathematics, vol. 1928, Springer-
Verlag, Berlin, 2008. MR2368284
[89] J. Jonsson, Introduction to simplicial homology, 2011, Available at https://people.kth.se/
~jakobj/homology.html.
[90] M. Joswig, F. H. Lutz, and M. Tsuruga, Heuristic for sphere recognition, Mathematical software—
ICMS 2014, Lecture Notes in Comput. Sci., vol. 8592, Springer, Heidelberg, 2014, pp. 152–159,
DOI 10.1007/978-3-662-44199-2_26. MR3334760
[91] M. Joswig and M. E. Pfetsch, Computing optimal Morse matchings, SIAM J. Discrete Math. 20
(2006), no. 1, 11–25, DOI 10.1137/S0895480104445885. MR2257241
[92] S. Jukna, Boolean function complexity, Algorithms and Combinatorics, vol. 27, Springer, Heidel-
berg, 2012. Advances and frontiers. MR2895965
[93] T. Kaczynski, K. Mischaikow, and M. Mrozek, Computational homology, Applied Mathematical
Sciences, vol. 157, Springer-Verlag, New York, 2004. MR2028588
[94] A. B. Kahn, Topological sorting of large networks, Communications of the ACM 5 (1962), no. 11,
558–562.
[95] J. Kahn, M. Saks, and D. Sturtevant, A topological approach to evasiveness, Combinatorica 4
(1984), no. 4, 297–306, DOI 10.1007/BF02579140. MR779890
[96] V. Kaibel and M. E. Pfetsch, Computing the face lattice of a polytope from its vertex-facet incidences,
Comput. Geom. 23 (2002), no. 3, 281–290, DOI 10.1016/S0925-7721(02)00103-7. MR1927137
[97] H. King, K. Knudson, and N. Mramor, Generating discrete Morse functions from point data, Ex-
periment. Math. 14 (2005), no. 4, 435–444. MR2193806
[98] H. King, K. Knudson, and N. Mramor Kosta, Birth and death in discrete Morse theory, J. Symbolic
Comput. 78 (2017), 41–60, DOI 10.1016/j.jsc.2016.03.007. MR3535328
[99] K. P. Knudson, Morse theory: Smooth and discrete, World Scientific Publishing Co. Pte. Ltd., Hack-
ensack, NJ, 2015. MR3379451
[100] K. Knudson and B. Wang, Discrete stratified Morse theory: a user’s guide, 34th International Sym-
posium on Computational Geometry, LIPIcs. Leibniz Int. Proc. Inform., vol. 99, Schloss Dagstuhl.
Leibniz-Zent. Inform., Wadern, 2018, Art. No. 54, 14 pp. MR3824298
[101] D. N. Kozlov, Complexes of directed trees, J. Combin. Theory Ser. A 88 (1999), no. 1, 112–122, DOI
10.1006/jcta.1999.2984. MR1713484
[102] D. N. Kozlov, Discrete Morse theory for free chain complexes (English, with English and
French summaries), C. R. Math. Acad. Sci. Paris 340 (2005), no. 12, 867–872, DOI
10.1016/j.crma.2005.04.036. MR2151775
[103] D. Kozlov, Combinatorial algebraic topology, Algorithms and Computation in Mathematics,
vol. 21, Springer, Berlin, 2008. MR2361455
[104] M. Krčál, J. Matoušek, and F. Sergeraert, Polynomial-time homology for simplicial Eilenberg-
MacLane spaces, Found. Comput. Math. 13 (2013), no. 6, 935–963, DOI 10.1007/s10208-013-9159-
7. MR3124946
[105] M. Kukieła, The main theorem of discrete Morse theory for Morse matchings with finitely many
rays, Topology Appl. 160 (2013), no. 9, 1074–1082, DOI 10.1016/j.topol.2013.04.025. MR3049255
[106] F. Larrión, M. A. Pizaña, and R. Villarroel-Flores, Discrete Morse theory and the homotopy
type of clique graphs, Ann. Comb. 17 (2013), no. 4, 743–754, DOI 10.1007/s00026-013-0204-7.
MR3129782
[107] D. Lay, Linear algebra and its applications, 4th ed., Pearson Prentice Hall, Upper Saddle River,
NJ, 2012.
[108] I.-C. Lazăr, Applications to discrete Morse theory: the collapsibility of CAT(0) cubical complexes of
dimension 2 and 3, Carpathian J. Math. 27 (2011), no. 2, 225–237. MR2906606
[109] T. Lewiner, H. Lopes, and G. Tavares, Optimal discrete Morse functions for 2-manifolds, Comput.
Geom. 26 (2003), no. 3, 221–233, DOI 10.1016/S0925-7721(03)00014-2. MR2005300
[110] M. Lin and N. A. Scoville, On the automorphism group of the Morse complex, arXiv e-prints (2019),
arXiv:1904.10907.
[111] N. Lingareddy, Calculating persistent homology using discrete Morse theory, 2018, Available at
http://math.uchicago.edu/~may/REU2018/REUPapers/Lingareddy.pdf.
[112] Y. Liu and N. A. Scoville, The realization problem for discrete Morse functions on trees, Algebra
Colloquium, to appear.
[113] A. T. Lundell and S. Weingram, The topology of CW complexes, The University Series in Higher
Mathematics, Van Nostrand Reinhold Co., New York, 1969. MR3822092
[114] L. Lusternik and L. Schnirelmann, Méthodes topologiques dans les problèmes variationnels, Her-
mann, Paris, 1934.
[115] H. Markram et al., Reconstruction and simulation of neocortical microcircuitry, Cell 163 (2015),
no. 2, 456–492, DOI 10.1016/j.cell.2015.09.029.
[116] J. Milnor, Morse theory, Based on lecture notes by M. Spivak and R. Wells. Annals of Mathematics
Studies, No. 51, Princeton University Press, Princeton, N.J., 1963. MR0163331
[117] J. Milnor, Lectures on the ℎ-cobordism theorem, Notes by L. Siebenmann and J. Sondow, Princeton
University Press, Princeton, N.J., 1965. MR0190942
[118] E. G. Minian, Some remarks on Morse theory for posets, homological Morse theory and finite
manifolds, Topology Appl. 159 (2012), no. 12, 2860–2869, DOI 10.1016/j.topol.2012.05.027.
MR2942659
[119] K. Mischaikow and V. Nanda, Morse theory for filtrations and efficient computation of persistent
homology, Discrete Comput. Geom. 50 (2013), no. 2, 330–353, DOI 10.1007/s00454-013-9529-6.
MR3090522
[120] J. W. Moon, Counting labelled trees, From lectures delivered to the Twelfth Biennial Seminar
of the Canadian Mathematical Congress (Vancouver, 1969), Canadian Mathematical Congress,
Montreal, Que., 1970. MR0274333
[121] J. Morgan and G. Tian, Ricci flow and the Poincaré conjecture, Clay Mathematics Monographs,
vol. 3, American Mathematical Society, Providence, RI; Clay Mathematics Institute, Cambridge,
MA, 2007. MR2334563
[122] F. Mori and M. Salvetti, (Discrete) Morse theory on configuration spaces, Math. Res. Lett. 18 (2011),
no. 1, 39–57, DOI 10.4310/MRL.2011.v18.n1.a4. MR2770581
[123] F. Mori and M. Salvetti, Discrete topological methods for subspace arrangements, Arrangements
of hyperplanes—Sapporo 2009, Adv. Stud. Pure Math., vol. 62, Math. Soc. Japan, Tokyo, 2012,
pp. 293–321.
[124] M. Morse, Relations between the critical points of a real function of 𝑛 independent variables, Trans.
Amer. Math. Soc. 27 (1925), no. 3, 345–396, DOI 10.2307/1989110. MR1501318
[125] M. Morse, Functional topology and abstract variational theory, Ann. of Math. (2) 38 (1937), no. 2,
386–449, DOI 10.2307/1968559. MR1503341
[126] M. Morse, The calculus of variations in the large, American Mathematical Society Colloquium
Publications, vol. 18, American Mathematical Society, Providence, RI, 1996. Reprint of the 1932
original. MR1451874
[127] M. Mrozek and B. Batko, Coreduction homology algorithm, Discrete Comput. Geom. 41 (2009),
no. 1, 96–118, DOI 10.1007/s00454-008-9073-y. MR2470072
[128] V. Nanda, D. Tamaki, and K. Tanaka, Discrete Morse theory and classifying spaces, Adv. Math. 340
(2018), 723–790, DOI 10.1016/j.aim.2018.10.016. MR3886179
[129] L. Nicolaescu, An invitation to Morse theory, 2nd ed., Universitext, Springer, New York, 2011.
MR2883440
[130] A. Nikitenko, Discrete Morse theory for random complexes, Ph.D. thesis, Institute of Science and
Technology Austria, 2017.
[131] R. O’Donnell, Analysis of Boolean functions, Cambridge University Press, New York, 2014.
MR3443800
[132] P. Orlik and V. Welker, Algebraic combinatorics, Universitext, Springer, Berlin, 2007. Lectures
from the Summer School held in Nordfjordeid, June 2003. MR2322081
[133] S. Y. Oudot, Persistence theory: from quiver representations to data analysis, Mathematical Surveys
and Monographs, vol. 209, American Mathematical Society, Providence, RI, 2015. MR3408277
[134] V. V. Prasolov, Elements of homology theory, Graduate Studies in Mathematics, vol. 81, American
Mathematical Society, Providence, RI, 2007. Translated from the 2005 Russian original by Olga
Sipacheva. MR2313004
[135] M. Reimann et al., Cliques of neurons bound into cavities provide a missing link between structure
and function, Front. Comput. Neurosci. (2017), 12 June, DOI 10.3389/fncom.2017.00048.
[136] R. Reina-Molina, D. Díaz-Pernil, P. Real, and A. Berciano, Membrane parallelism for discrete
Morse theory applied to digital images, Appl. Algebra Engrg. Comm. Comput. 26 (2015), no. 1-
2, 49–71, DOI 10.1007/s00200-014-0246-z. MR3320905
[137] J. J. Rotman, An introduction to algebraic topology, Graduate Texts in Mathematics, vol. 119,
Springer-Verlag, New York, 1988. MR957919
[138] C. P. Rourke and B. J. Sanderson, Introduction to piecewise-linear topology, Springer Study Edition,
Springer-Verlag, Berlin-New York, 1982. Reprint. MR665919
[139] A. Sawicki, Discrete Morse functions for graph configuration spaces, J. Phys. A 45 (2012), no. 50,
505202, 25, DOI 10.1088/1751-8113/45/50/505202. MR2999710
[140] N. A. Scoville and W. Swei, On the Lusternik-Schnirelmann category of a simplicial map, Topology
Appl. 216 (2017), 116–128, DOI 10.1016/j.topol.2016.11.015. MR3584127
[141] N. A. Scoville and K. Yegnesh, A persistent homological analysis of network data flow malfunctions,
Journal of Complex Networks 5 (2017), no. 6, 884–892.
[142] E. Sköldberg, Morse theory from an algebraic viewpoint, Trans. Amer. Math. Soc. 358 (2006), no. 1,
115–129, DOI 10.1090/S0002-9947-05-04079-1. MR2171225
[143] S. Smale, The generalized Poincaré conjecture in higher dimensions, Bull. Amer. Math. Soc. 66
(1960), 373–375, DOI 10.1090/S0002-9904-1960-10458-2. MR0124912
[144] S. Smale, Marston Morse (1892–1977), Math. Intelligencer 1 (1978/79), no. 1, 33–34, DOI
10.1007/BF03023042. MR0490936
[145] R. E. Stong, Finite topological spaces, Trans. Amer. Math. Soc. 123 (1966), 325–340, DOI
10.2307/1994660. MR0195042
[146] K. Tanaka, Lusternik-Schnirelmann category for cell complexes and posets, Illinois J. Math. 59
(2015), no. 3, 623–636. MR3554225
[147] K. Tanaka, Lusternik-Schnirelmann category for categories and classifying spaces, Topology Appl.
239 (2018), 65–80, DOI 10.1016/j.topol.2018.02.031. MR3777323
[148] H. Vogt, Leçons sur la résolution algébrique des équations, Nony, Paris, 1895.
[149] P. Škraba, Persistent homology and machine learning (English, with English and Slovenian sum-
maries), Informatica (Ljubl.) 42 (2018), no. 2, 253–258. MR3835054
[150] D. Weiller, Smooth and discrete Morse theory, Bachelor’s thesis, Australian National University,
2015.
[151] J. H. C. Whitehead, Simplicial spaces, nuclei and m-groups, Proc. London Math. Soc. (2) 45 (1939),
no. 4, 243–327, DOI 10.1112/plms/s2-45.1.243. MR1576810
[152] J. H. C. Whitehead, Simple homotopy types, Amer. J. Math. 72 (1950), 1–57, DOI 10.2307/2372133.
MR0035437
[153] E. Witten, Supersymmetry and Morse theory, J. Differential Geom. 17 (1982), no. 4, 661–692 (1983).
MR683171
[154] C.-K. Wu and D. Feng, Boolean functions and their applications in cryptography, Advances in
Computer Science and Technology, Springer, Heidelberg, 2016. MR3559545
[155] M. C. B. Zaremsky, Bestvina-Brady discrete Morse theory and Vietoris-Rips complexes, arXiv e-
prints (2018), arXiv:1812.10976.
[156] R. E. Zax, Simplifying complicated simplicial complexes: discrete Morse theory and its applications,
Bachelor’s thesis, Harvard University, 2012.
[157] E. C. Zeeman, On the dunce hat, Topology 2 (1964), 341–358, DOI 10.1016/0040-9383(63)90014-4.
MR0156351
[158] A. Zorn, Discrete Morse theory on simplicial complexes, 2009, Available at http://www.math.
uchicago.edu/~may/VIGRE/VIGRE2009/REUPapers/Zorn.pdf.
Notation and symbol index

(𝕜∗, 𝜕∗), 89
(𝑓𝑖)∗, 197
∗, 22
𝐵^𝑆(𝑣, 𝑒), 242
𝐵𝑖^𝑓(𝑘), 118
𝐶𝐾, 23
𝐷, 25
𝐷𝑓, 145
𝐹𝑒, 242
𝐺 = (𝑀, 𝐶), 228
𝐻𝑝^{𝜍,𝜏}, 144
𝐻𝑖(𝐾), 92
𝐾(𝑐), 112
𝐾 ∗ 𝐿, 35
𝐾 − {𝑣}, 236
𝐾 ≈ 𝐿, 240
𝐾 ∼ 𝐿, 31
𝐾^𝑖, 19
𝐾0, 240
𝐾𝑖, 87
𝐿𝑖,𝑗(𝐺), 183
𝑃², 24
𝑃𝑖^𝑛, 150
𝑆^𝑛, 22
𝑇², 23
𝑇𝑖^𝑛, 152
𝑉(𝐾), 17
𝑉 ≅ 𝑊, 199
𝑉𝑓, 57
𝑉𝑝(𝜎), 188
[𝛼, 𝛽], 70
[𝑣𝑛], 17
⟨⋅⟩, 21
𝕜^𝑛, 84
𝕜Φ∗(𝐾), 195
𝕜Φ𝑝(𝐾), 195
Δ^𝑛, 22
𝔽2, 83
Γ𝑓, 153
ℋ𝑉, 66
ℳ𝑝^𝑖, 201
Φ∞(𝑐), 196
Φ𝑝, 190
Σ𝐾, 36
𝑓̄, 143
𝛽𝑝^{𝜍,𝜏}, 144
𝜎̄, 19
𝜒(𝐾), 30
dim(𝐾), 19
Im(𝐴), 85
ker(𝐴), 85
←𝑉, 138
link𝐾(𝑉), 166
ℋ(𝑖), 64
ℋ𝐾, 64
𝒦, 25
ℳ, 26
ℳ(𝐾), 171
ℳ𝑃(𝐾), 179
𝒫([𝑣𝑛]), 22
ℛ(𝜎), 178
𝒮(ℛ), 179
low(𝑗), 126
maxf0(𝜎), 210
↗, 31
↗↗, 237
null(𝐴), 85
⊕, 96
ℝ, 145
𝜕𝐾(𝜎) (of a simplex), 21
𝜕𝑖 (linear transformation), 89
≺, 138
≺𝑉, 138
≺𝑓, 139
rank(𝐴), 85
𝑔|𝐾, 74
scat(𝐾), 250
scat(𝑓), 249
scrit(ℳ), 244
scrit(𝑓), 245
↘, 31
↘↘, 237
𝜎_, 144
𝜎(𝑖), 20
𝜎0 → 𝜎1 → ⋯ → 𝜎𝑛, 60
star𝐾(𝑣), 166
𝜏 < 𝜎, 19
0⃗, 83
𝑐⃗𝐾, 19
𝑓⃗, 74
𝑏𝑖(𝐾), 92
𝑐(𝑀), 157
𝑐𝑖, 19
𝑑(𝑓, 𝑔), 134
𝑑𝐵(𝑋, 𝑌), 146
𝑒𝑝(𝐴), 158
𝑓 ∼ 𝑔, 239
𝑓 ∼𝑐 𝑔, 238
𝑔 ≤ 𝑓, 175
ℎ𝑡, 107
𝑖𝜍,𝜏^𝑓, 144
𝑚𝑖, 74
(𝑛 𝑘) (binomial coefficient), 27
𝑣0𝑣1⋯𝑣𝑛, 20
2–1, 44
Index
𝑉-path, 60, 109–111, 159, 217 ExtractCancel, 218
closed, 61, 65 ExtractRaw, 216
extending, 110 for discrete Morse matching,
maximal, 61 228
non-trivial, 60 Hasse diagram, 224
𝑛-simplex, 22, 31, 34, 159, 164, Argentinean complex, 238,
181 246, 250, 255
generalized discrete Morse
function, 72 Betti number, 92, 98, 118, 158,
is collapsible, 37 205, 221
not a Morse complex, 176 bounded by evaders, 166
perfect discrete Morse is an invariant, 96
function, 105 of 𝑆 2 , 95
𝑛-sphere, 22, 31, 34 of collapsible complex, 97
Betti numbers, 98 of punctured spheres, 99
perfect discrete Morse of spheres, 98
function, 105 relation to critical simplices,
two critical simplices, 55 101
relation to Euler
algorithm characteristic, 95
B-L algorithm binomial coefficient, 27, 28
detects collapsibility, 78 Björner’s example, 26
B-L algorithm, 76 homology, 208
Cancel, 217 Boolean function, 149
267
268 INDEX
constant, 152 strongly, 237, 241, 254
evasive, 151, 152 one critical simplex, 248
hider, 149 commutative diagram, 197, 252
induced simplicial complex, consistent, see also gradient
154 vector field, consistent
monotone, 152, 154 with discrete Morse
projection, 150, 154, 159 function
seeker, 149 contiguity, 252
threshold, 152, 153, 155, 156 contiguity class, 239
boundary of a point, 239
of a simplex, 21, 87, 89 of the identity, 239
boundary operator, 89, 90, 95, contiguous, 238
190, 195, 197 with the identity, 240
commutes with flow, 193 critical complex, 188, 201, 204,
twice is 0, 91 220, 223
homology, 203
canceling critical simplices, CW complex, 103
106, 110, 217
categorical cover, 249, 251
chain, 88, 197 decision tree algorithm, 155,
chain complex, 88, 195, 197, 158
202 complexity, 157
split, 96, 97 evader, 157, 164
subchain complex, 96, 97 induced gradient vector
chain map, 197, 198 field, 159
collapse, 31, 32, 37, 42, 76, 96, discrete Morse function
115, 116, 159, 160, 164, pseudo-discrete Morse
234, 240 function
strong, 236, 237 linear combination, 137
collapse theorem, 115, 121, 141 discrete Morse function, 48, 77,
generalized, 116 108
strong, 247, 254 excellent
collapsible, 74, 78, 105, 181 homological sequence,
Betti numbers, 97 121
does not imply nonevasive, all simplices critical, 52
167 basic, 44, 47, 50, 52, 55, 106,
one critical simplex, 55 123
INDEX 269
consistent with a gradient discrete Morse matching, 170,
vector field, 137 171, 213, 222, 228
consistent with a gradient critical object, 244
vector field, 135, 137, 141 critical pair, 242
critical simplex, 46, 51, 53, index, 244
62, 112, 189 critical simplex, 244
at least one, 53 strong, 244
critical value, 46, 51 trivial, 228
excellent, 55, 55, 120 discrete Morse spectrum, 77
flat, 106, 120, 137, 245 discrete Morse vector, 74, 77,
flat pseudo-discrete Morse 104
function, 141 of collapsible complex, 74
flattening, 106, 120, 142, 143 discrete vector field, 58, 68, 159
generalized, 71, 116, 247 generalized, 71, 116
sum of, 73 induced partial order, 138
Hasse equivalent, 171 not a gradient vector field, 59
level subcomplex, 112, 118 relation to gradient vector
optimal, 74, 76, 78, 104, 204, field, 61, 69
220 relation to Hasse diagram, 68
not unique, 76 distance, 134
perfect, 104, 105, 119, 182 between discrete Morse
non-existence of, 105 functions, 134, 143, 147
primitive, 174 Dunce cap, 25, 38, 105
compatible, 175 Betti number, 95
pseudo-discrete Morse not collapsible, 38
function, 135, 137
pure, 143 Euler characteristic, 28, 30, 38,
regular simplex, 46, 51 251
regular value, 46, 51 invariance of, 33
strong, 245, 254 of Δ𝑛 , 31
critical object, 254 of 𝑛-sphere, 31
sum of, 247 of a graph, 182
weakly increasing, 44 of a point, 35
discrete Morse graph, 228, 229 of suspension, 37
discrete Morse inequalities relation to Betti numbers, 95
strong, 103, 104, 205 relation to critical simplices,
weak, 101, 104, 165, 187 101
exclusion lemma, 49, 52, 58, 115, 178, 213
expansion, 31, 32, 96, 164
  strong, 237
filtration, 123
flow, 190
  commutes with boundary, 193
  stabilize, 191, 192, 193, 195, 199
flow complex, 195, 198
  homology of, 199
Forman equivalence, 53, 69, 108, 171
  relation to gradient vector field, 62
  with a 1–1 discrete Morse function, 55, 102
  with a flat discrete Morse function, 106
free pair, 31, 42
gradient vector field, 43, 56, 57, 62, 107–109, 111, 135, 170, 188
  consistent with discrete Morse function, 137
  arrow, 57, 58
  consistent with discrete Morse function, 135, 137, 141
  counting maximum, 184
  critical simplex, 58
  gradient path, 60, 217
  head, 42, 57, 57, 58, 189, 213
  induced by decision tree algorithm, 159
  matching, 57
  maximum, 182, 183, 184
  minimal, 141, 141, 142
  tail, 42, 57, 57, 58, 189, 213
graph, 23, 30, 49, 56, 69
  adjacent, 183
  complete graph, 251
  connected, 177
  counting spanning trees, 184
  degree, 183
  edge, 177
  forest, 178, 251
    root, 178
  Laplacian, 183, 185
  Morse complex, 176
  perfect discrete Morse function, 105
  rooted forest, 178, 178
  spanning tree, 182
  subgraph, 251
  tree, 45, 177, 181
    leaf, 159, 181
  vertex, 177
Hasse diagram, 64, 71, 139, 170, 223, 228, 242
  algorithm to construct, 224
  directed, 66, 69, 170, 171, 214, 221
  directed cycle, 67, 68
  downward, 66, 228
  Forman vector, 242
  is poset, 64
  level, 64, 68
  matching, 169, 171
    acyclic, 169, 171, 228
  node, 64, 228
  strong vector, 242, 244
  upward, 66, 223, 228
Hasse equivalent, 171
heresy, 157
homological equivalence, 119, 123, 143
homological sequence, 118, 129
  does not imply Forman equivalence, 120
  of excellent discrete Morse function, 121
homology, 197, 199
  critical complex, 203, 205
  removing a facet, 98
  simplicial, 81, 92
homotopy
  between flat pseudo-discrete Morse functions, 147
  of discrete Morse functions, 107, 109, 135
  straight-line, 107, 147
identity map, 198, 198, 240
inclusion, see also linear transformation/simplicial map, inclusion
index of a pair, 157, 244
interval, 70, 247
  singular, 71, 116
invariant, 33, 33, 97
  of strong homotopy type, 254
iterated discrete Morse decomposition, 230
join, see also simplicial complex, join
Kirchhoff's theorem, 184
Klein bottle, 25
  homology, 205
linear extension, 139
linear transformation, 84, 88, 124, 197, 198
  image, 95, 197
  inclusion, 144, 198
  kernel, 85, 95, 197
  nullity, 85, 92, 197
  rank, 85, 92, 197
    invariant under row operations, 85
  well-defined, 198
lower star filtration, 210
matrix
  eigenvalue, 184
  leading coefficient, 85
  row echelon form, 85
maximizer, 195, 196, 202
Möbius band, 26, 35, 48, 108
Morse complex, 171, 175
  bijection with rooted forest, 178
  of a tree is collapsible, 181
  of rooted forest, 177
  pure, 179, 181, 183
    counting facets, 184
    counting facets, 183
Morse matching, see also discrete Morse matching
multiset, 145
  individual elements, 146
  multiplicity, 145
partially ordered set, see also poset
Pascal's rule, 28
persistence, 213
persistent homology, 124, 144
  bar code, 128
  Betti number, 144
  birth, 127
  birth, 144
  bottleneck distance, 146, 147
  death, 127, 144
  persistence diagram, 132, 145
    well-defined, 145
  persistence pair, 144
  point at infinity, 132, 144
poset, 63, 63, 64, 138, 139
  consistent, 139, 142
projective plane, 24
  homology, 207
rank-nullity theorem, 85, 95, 98
restriction of a function, 74, 102, 195, 201, 249
simple homotopy type, 34
simple homotopy type, 31, 32–34, 37, 95, 97, 176
  of Δⁿ, 37
  of a point, 31, 35
  of cone, 37
  of spheres, 98
simplex
  codimension, 21
  boundary, 21, 22, 89, 228
  codimension, 22, 125
  coface, 19, 31, 49
  critical, 55, 62, 71, 101, 115, 193
  face, 19, 31
  facet, 21, 21, 181, 223, 239
  join, 212
  regular, 52
simplicial category
  of a complex, 250, 254
  of a map, 249, 252
  of Argentinean complex, 255
  of suspension, 251
  of union, 254
simplicial complex, 17
  c-vector, 19, 20, 30, 74
  collapsible, 35, 36, 37, 116, 166
  cone, 23, 36
    is collapsible, 36
  core, 240, 241
    is unique, 241
  covering, 249
  dimension, 19, 20, 33
  evasive, 157, 167
  facet, 21, 77, 98, 183, 184, 235
  generated by simplex, 19, 22, 250
  generated by simplices, 22, 22
  induced by Boolean function, 153, 154
  isomorphic, 241
  isomorphism, 240, 241
  join, 35
  minimal, 239, 240
  nonevasive, 157, 166
    implies collapsible, 166
  pure, 179
  simplex, 19
  skeleton, 19
  subcomplex, 19, 22, 123
    maximal, 210
  suspension, 36, 37, 251
  vertex, 17
    dominate, 235, 236, 239, 240, 255
    link, 166, 210
    lower link, 210
    star, 166, 210
  vertex set, 17, 19, 33
simplicial map, 234, 235–237, 239
  inclusion, 234, 240
  null-homotopic, 239, 250
  retraction, 236
  strong homotopy equivalence, 240, 241, 254
stability theorem, 147
strong homotopy type, 237, 240, 241
  of a point, 237, 241
strongly homotopic, 239, 252
topological sorting, 229
torus, 23, 30, 47, 58
total ordering, 124, 160, 229
  strict, 138, 140
triangle inequality, 135, 148
uniform norm, 134
vector space, 81, 83, 188, 197
  basis element, 84
  chain, 88, 89
  dimension, 84
  direct sum, 96
  generated by set, 84, 88
  isomorphic, 199
  isomorphism, 198, 201, 203
  linear combination, 84, 193, 195
vertex refinement, 116
Photo credit: Jennifer Scoville

Discrete Morse theory is a powerful tool combining ideas in both
topology and combinatorics. Invented by Robin Forman in the
mid-1990s, discrete Morse theory is a combinatorial analogue of
Marston Morse's classical Morse theory. Its applications are vast,
ranging from topological data analysis to combinatorics and
computer science.
This book, the first one devoted solely to discrete Morse theory, serves
as an introduction to the subject. Since the book restricts the study
of discrete Morse theory to abstract simplicial complexes, a course in
mathematical proof writing is the only prerequisite needed. Topics
covered include simplicial complexes, simple homotopy, collapsibility,
gradient vector fields, Hasse diagrams, simplicial homology, persistent
homology, discrete Morse inequalities, the Morse complex, discrete
Morse homology, and strong discrete Morse functions. Students of
computer science will also find the book beneficial, as it covers topics
such as Boolean functions and evasiveness and devotes a chapter to
computational aspects of discrete Morse theory. The book is
appropriate for a course in discrete Morse theory, a supplemental text
to a course in algebraic topology or topological combinatorics, or an
independent study.
For additional information
and updates on this book, visit
www.ams.org/bookpages/stml-90
STML/90