Dynamical Properties of Hamilton–Jacobi Equations via the Nonlinear Adjoint Method: Large Time Behavior and Discounted Approximation
Hiroyoshi Mitake
Hung V. Tran

Department of Mathematics,
University of Wisconsin Madison,
Van Vleck Hall, 480 Lincoln Drive, Madison, WI 53706, USA
December 4, 2016
Contents

4 Appendix
4.1 Motivation and Examples
4.1.1 Front propagation problems
4.1.2 Optimal control problems
4.2 Definitions
4.3 Consistency
4.4 Comparison principle and uniqueness
4.5 Stability
4.6 Lipschitz estimates
4.7 The Perron method
Introduction
These notes are based on the two courses given by the authors at the summer school
on “PDE and Applied Mathematics” at Vietnam Institute for Advanced Study in
Mathematics (VIASM) from July 14 to July 25, 2014. The first course was about the
basic theory of viscosity solutions, and the second course was about asymptotic
analysis of Hamilton–Jacobi equations. In particular, we focused on the large time
asymptotics of solutions and the selection problem of the discounted approximation.
We study both the inviscid (or first-order) Hamilton–Jacobi equation

ut(x, t) + H(x, Du(x, t)) = 0 for x ∈ Rn, t > 0, (0.1)

and the viscous Hamilton–Jacobi equation

ut(x, t) − ∆u(x, t) + H(x, Du(x, t)) = 0 for x ∈ Rn, t > 0. (0.2)
Here, u : Rn × [0, ∞) → R is the unknown, and ut , Du, ∆u denote the time derivative,
the spatial gradient and the Laplacian of u, respectively. The Hamiltonian H : Rn ×
Rn → R is a given continuous function. We will add suitable assumptions later when
needed. At some points, we also consider the general (possibly degenerate) viscous
Hamilton–Jacobi equation:
ut(x, t) − tr( A(x)D²u(x, t) ) + H(x, Du(x, t)) = 0 for x ∈ Rn, t > 0, (0.3)

where D²u denotes the Hessian of u, and A : Rn → M^{n×n}_{sym} is a given continuous diffusion matrix, which is nonnegative definite and possibly degenerate. Here, M^{n×n}_{sym} is the set of n × n real symmetric matrices, and for S ∈ M^{n×n}_{sym}, tr(S) denotes the trace of the matrix S.
A central tool in these notes is the nonlinear adjoint method: one studies the adjoint equation of the linearized operator to derive new information about the
solution, which could not be obtained by previous techniques. Evans introduced this
method to study the gradient shock structures of the vanishing viscosity procedure
of viscosity solutions. With Cagnetti, Gomes, the authors used this method to study
the large-time behavior of solutions to (0.3). Another interesting topic is about the
selection problem in the discounted approximation setting. This was studied by Davini,
Fathi, Iturriaga, Zavidovique [26] by using a dynamical approach, and the authors [71]
by using a nonlinear adjoint method.
The outline of the lecture notes is as follows. In Chapter 1, we investigate the
ergodic problems associated with (0.1)–(0.3). In particular, we prove the existence
of solutions to the ergodic problems. In Chapters 2 and 3, we study the large time
behavior of solutions to (0.1)–(0.3), and the selection problem for the discounted approximation, respectively. To make the lecture notes self-contained, we prepare a brief introduction to the theory of viscosity solutions of first-order equations in the Appendix. The Appendix can be read independently from the other chapters. Also, we note that Chapters
2 and 3 can be read independently.
It is worth pointing out that these lecture notes reflect the state of the art of the
subject by the end of summer 2014. We will address some up-to-date developments at
the end of Chapters 2 and 3.
Acknowledgement. The work of HM was partially supported by JSPS KAKENHI grants #15K17574, #26287024, and #16H03948, and the work of HT was partially supported by NSF grants DMS-1361236 and DMS-1615944. These lecture notes were
started while the authors visited the Vietnam Institute for Advanced Study in Mathematics (VIASM) in July 2014. The authors are extremely grateful for their hospitality.
The authors thank the anonymous referee for carefully reading the previous version
of the manuscript and valuable comments. The authors also thank Son Van for his
thoughtful suggestions.
Notations
• For x ∈ Rn and r > 0, B(x, r) denotes the open ball with center x and radius r,
that is, B(x, r) = {y ∈ Rn : |y − x| < r}.
• M^{n×n}_{sym} is the set of n × n real symmetric matrices.
• For S ∈ M^{n×n}_{sym}, tr(S) denotes the trace of the matrix S.
• For A, B ∈ M^{n×n}_{sym}, A ≥ B (or B ≤ A) means that A − B is nonnegative definite.
• Given a metric space X, C(X), USC (X), LSC (X) denote the space of all contin-
uous, upper semicontinuous, lower semicontinuous functions in X, respectively.
Let Cc (X) denote the space of all continuous functions in X with compact sup-
port.
• For any interval J ⊂ R, AC (J, Rm ) is the set of all absolutely continuous functions
in J with value in Rm .
• For U ⊂ Rn open, k ∈ N and α ∈ (0, 1], C k (U ) and C k,α (U ) denote the space
of all functions whose k-th order partial derivatives are continuous and Hölder
continuous with exponent α in U , respectively. Also C ∞ (U ) is the set of all
infinitely differentiable functions in U .
• The L∞ norm of u in U is defined as ‖u‖_{L∞(U)} := ess sup_{U} |u|.
• We use the letter C to denote any constant which can be explicitly computed in
terms of known quantities. The exact value of C could change from line to line
in a given computation.
Chapter 1
Ergodic Problems for Hamilton–Jacobi Equations
1.1 Motivation
One of our main goals in these lecture notes is to understand the large-time behavior of
the solutions to various Hamilton–Jacobi type equations. We cover both the first-order
and the second-order cases. The first-order equation is of the form
ut + H(x, Du) = 0 in Rn × (0, ∞),
u(x, 0) = u0(x) on Rn. (1.1)
It is often the case that we need to guess the form of an expansion of u(x, t) when we do not yet know how it behaves as t → ∞. Let us consider a formal asymptotic expansion of u(x, t):

u(x, t) = a1(x)t + a2(x) + a3(x)t^{−1} + · · · ,
where a_i ∈ C∞(Rn) for all i ≥ 1. Plug this into equation (1.1) to yield

( a1(x) − a3(x)t^{−2} + · · · ) + H( x, Da1(x)t + Da2(x) + Da3(x)t^{−1} + · · · ) = 0.

In view of the coercivity assumption (1.4), we should have Da1(x) ≡ 0, as the other terms are bounded with respect to t as t → ∞, which therefore implies that the function a1 must be constant. Thus, there exists c ∈ R such that a1(x) ≡ −c for all x ∈ Rn. Set v(x) = a2(x) for x ∈ Rn.
From this observation, we expect that the large-time behavior of the solution to (1.1)
is
u(x, t) − (v(x) − ct) → 0 locally uniformly for x ∈ Rn as t → ∞, (1.5)
for some function v and constant c. Moreover, if convergence (1.5) holds, then by the stability result of viscosity solutions (see Section 4.5), the pair (v, c) satisfies

H(x, Dv) = c in Tn. (1.6)

Therefore, in order to investigate whether convergence (1.5) holds or not, we first need to study the well-posedness of the above problem. We call it an ergodic problem for Hamilton–Jacobi equations. This ergodic problem will be one of the main objects in the next section.
Remark 1.1. One may wonder why we do not consider terms like bi (x)ti for i ≥ 2 in
the above formal asymptotic expansion of u. We will give a clear explanation at the
end of this chapter.
For second-order equations, we consider both the ergodic problem for the viscous case

−∆v(x) + H(x, Dv) = c in Tn, (1.7)

and, more generally, the ergodic problem for the possibly degenerate viscous case

−tr( A(x)D²v(x) ) + H(x, Dv) = c in Tn. (1.8)
In all cases, we seek a pair of unknowns (v, c) ∈ C(Tn) × R so that v solves the
corresponding equation in the viscosity sense.
1.2 Existence of Solutions to Ergodic Problems
Theorem 1.1. Assume that H ∈ C(Tn × Rn ) and that H satisfies (1.4). Then there
exists a pair (v, c) ∈ Lip (Tn ) × R such that v solves (1.6) in the viscosity sense.
The proof of Theorem 1.1 is based on the discounted approximation: for δ > 0, consider

δv^δ + H(x, Dv^δ) = 0 in Tn. (1.9)
Remark 1.2. Let us notice that the approximation procedure above using (1.9) is
called the discounted approximation procedure. It is a very natural procedure in many
ways. Firstly, the approximation makes equation (1.9) strictly monotone in v δ , which
fits perfectly in the well-posedness setting of viscosity solutions. See Section 4.1.2 for
the formula of v δ in terms of optimal control.
Secondly, for w^δ(x) = δv^δ(x/δ), w^δ solves

w^δ + H(x/δ, Dw^δ) = 0 in Tn,

which is the setting to study an important phenomenon called homogenization.
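For completeness, this rescaling can be verified directly by the chain rule:

```latex
Dw^\delta(x) = Dv^\delta\!\Big(\frac{x}{\delta}\Big),
\qquad
w^\delta(x) + H\!\Big(\frac{x}{\delta}, Dw^\delta(x)\Big)
= \delta\, v^\delta\!\Big(\frac{x}{\delta}\Big)
+ H\!\Big(\frac{x}{\delta}, Dv^\delta\!\Big(\frac{x}{\delta}\Big)\Big) = 0,
```

which is just (1.9) evaluated at the point x/δ.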
The arguments in the proof of Theorem 1.1 are soft, as we just use a priori estimate (1.10) on |Dv^δ| and the Arzelà–Ascoli theorem to get the result. In particular, from this argument, we only know convergence of {v^{δ_j} − v^{δ_j}(x0)}_j via the subsequence {δ_j}_j. It is not clear at all at this moment whether {v^δ − v^δ(x0)}_{δ>0} converges uniformly as δ → 0 or not. We will come back to this question and give a positive answer under some additional assumptions in Chapter 3.
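As a concrete illustration of the discounted procedure (ours, not from the notes; the Hamiltonian, potential, and all numerical parameters below are illustrative choices), consider H(x, p) = p²/2 + V(x) with V(x) = cos(2πx) on the one-dimensional torus, whose ergodic constant is c = max V = 1. A monotone (Godunov upwind) fixed-point iteration for (1.9) then exhibits δv^δ ≈ −c:

```python
import numpy as np

# Numerical sketch of the discounted approximation (1.9) for
#   H(x, p) = p^2/2 + V(x),  V(x) = cos(2*pi*x),  on the 1D torus,
# whose ergodic constant is c = max V = 1.
N = 100
dx = 1.0 / N
x = np.arange(N) * dx
V = np.cos(2 * np.pi * x)
delta = 0.2                            # discount factor

def H_num(v):
    """Godunov numerical Hamiltonian for p -> p^2/2 (convex, minimum at p = 0)."""
    pm = (v - np.roll(v, 1)) / dx      # backward difference D^- v
    pp = (np.roll(v, -1) - v) / dx     # forward difference  D^+ v
    return np.maximum(np.maximum(pm, 0.0), -np.minimum(pp, 0.0)) ** 2 / 2.0

# Monotone pseudo-time iteration  v <- v - tau*(delta*v + H(x, Dv) + V):
v = np.zeros(N)
tau = 0.1 * dx                         # CFL-type restriction for monotonicity
for _ in range(60000):
    v = v - tau * (delta * v + H_num(v) + V)

# Proposition 1.6 below: |v^delta + c/delta| <= C, so delta*v^delta is near -c.
print(np.max(np.abs(delta * v + 1.0)))   # O(delta)
```

Decreasing δ (with correspondingly more iterations) drives δv^δ closer to −c = −1, in line with Proposition 1.6 below.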
Let us now provide the existence proof for the viscous case. To do this, we need a
sort of superlinearity condition on H:
lim_{|p|→∞} ( (1/(2n)) H(x, p)² + D_xH(x, p) · p ) = +∞ uniformly for x ∈ Tn. (1.11)
Theorem 1.2. Assume that H ∈ C 2 (Tn × Rn ) and that H satisfies (1.11). Then there
exists a pair (v, c) ∈ Lip (Tn ) × R such that v solves (1.7) in the viscosity sense.
Proof. The proof is based on the standard Bernstein method. For δ > 0, consider the
approximate problem
δv δ − ∆v δ + H(x, Dv δ ) = 0 in Tn . (1.12)
By repeating the first step in the proof of Theorem 1.1, we obtain the existence of a
solution v δ to the above. Note that in this case, by the classical regularity theory for
elliptic equations, v δ is smooth due to the appearance of the diffusion ∆v δ .
Differentiate (1.12) with respect to x_i, multiply by v^δ_{x_i}, and sum over i. Setting ϕ := |Dv^δ|²/2 and noting

∆ϕ = |D²v^δ|² + ∆v^δ_{x_i} v^δ_{x_i},

we obtain that ϕ satisfies

2δϕ − ∆ϕ + |D²v^δ|² + D_xH(x, Dv^δ) · Dv^δ + D_pH(x, Dv^δ) · Dϕ = 0.

At a maximum point x0 of ϕ, we have Dϕ(x0) = 0 and ∆ϕ(x0) ≤ 0, and hence

|D²v^δ(x0)|² + D_xH(x0, Dv^δ(x0)) · Dv^δ(x0) ≤ 0.

On the other hand, by the Cauchy–Schwarz inequality and equation (1.12) together with the bound |δv^δ| ≤ C,

|D²v^δ(x0)|² ≥ (1/n)|∆v^δ(x0)|² ≥ (1/(2n)) H(x0, Dv^δ(x0))² − C

for some C > 0. Thus,

(1/(2n)) H(x0, Dv^δ(x0))² + D_xH(x0, Dv^δ(x0)) · Dv^δ(x0) ≤ C.

In light of (1.11), we get the a priori estimate ‖Dv^δ‖_{L∞(Tn)} ≤ C. This is enough to get the
conclusion as in the proof of Theorem 1.1.
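The identity for ∆ϕ used in the proof is elementary; with the summation convention,

```latex
\Delta\varphi
= \partial_{x_j}\partial_{x_j}\Big(\tfrac12\, v^\delta_{x_i} v^\delta_{x_i}\Big)
= \partial_{x_j}\big( v^\delta_{x_i x_j} v^\delta_{x_i}\big)
= v^\delta_{x_i x_j} v^\delta_{x_i x_j} + v^\delta_{x_i x_j x_j} v^\delta_{x_i}
= |D^2 v^\delta|^2 + \Delta v^\delta_{x_i}\, v^\delta_{x_i}.
```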
Here is a generalization of Theorems 1.1 and 1.2 to the degenerate viscous setting.
We use the following assumptions:
We remark that (1.13) is also a sort of superlinearity condition. It is clear that (1.13)
is stronger than (1.11). We need to require a bit more than (1.11) as we deal with the
general diffusion matrix A here. We use the label (H1) for the assumptions on A as it
is one of the main assumptions in the lecture notes, and we will need to use it later.
Theorem 1.3. Assume that A satisfies (H1). Assume further that H ∈ C 2 (Tn × Rn )
and H satisfies (1.13). Then there exists a pair (v, c) ∈ Lip (Tn ) × R such that v solves
(1.8) in the viscosity sense.
Proof. Consider the discounted problem

δv^δ − tr( A(x)D²v^δ ) + H(x, Dv^δ) = 0 in Tn (1.14)

for δ > 0.
Next, for α > 0, consider the equation
δv^{α,δ} − tr( A(x)D²v^{α,δ} ) + H(x, Dv^{α,δ}) = α∆v^{α,δ} in Tn. (1.15)
Owing to the discount and viscosity terms, there exists a (unique) classical solution v^{α,δ}. By the comparison principle, it is again clear that |δv^{α,δ}| ≤ M for M = max_{x∈Tn} |H(x, 0)|.
We use the Bernstein method again. As in the proof of Theorem 1.2, differentiate (1.15) with respect to x_k, multiply it by v^{α,δ}_{x_k}, and sum up with respect to k to obtain, with ϕ := |Dv^{α,δ}|²/2 and the summation convention,

2δϕ − a^{ij}_{x_k} v^{α,δ}_{x_i x_j} v^{α,δ}_{x_k} − a^{ij}( ϕ_{x_i x_j} − v^{α,δ}_{x_i x_k} v^{α,δ}_{x_j x_k} ) − α( ∆ϕ − |D²v^{α,δ}|² ) + D_xH · Dv^{α,δ} + D_pH · Dϕ = 0.

At a maximum point x0 of ϕ, we have Dϕ(x0) = 0, a^{ij}ϕ_{x_i x_j}(x0) ≤ 0 and ∆ϕ(x0) ≤ 0, and hence

2δϕ − a^{ij}_{x_k} v^{α,δ}_{x_i x_j} v^{α,δ}_{x_k} + D_xH · Dv^{α,δ} + a^{ij} v^{α,δ}_{x_i x_k} v^{α,δ}_{x_j x_k} + α|D²v^{α,δ}|² ≤ 0 at x0. (1.16)

The two terms a^{ij} v^{α,δ}_{x_i x_k} v^{α,δ}_{x_j x_k} and α|D²v^{α,δ}|² are the good terms, which will help us to control the other terms and to deduce the result.
We first use the trace inequality (see [76, Lemma 3.2.3] for instance)

( tr(A_{x_k} S) )² ≤ C tr(SAS) for all S ∈ M^{n×n}_{sym}, 1 ≤ k ≤ n,

for some constant C depending only on n and ‖D²A‖_{L∞(Tn)}, to yield that, for some small constant d > 0,

a^{ij}_{x_k} v^{α,δ}_{x_i x_j} v^{α,δ}_{x_k} = tr(A_{x_k} D²v^{α,δ}) v^{α,δ}_{x_k} ≤ d ( tr(A_{x_k} D²v^{α,δ}) )² + (C/d) |Dv^{α,δ}|²
≤ (1/2) tr( D²v^{α,δ} A D²v^{α,δ} ) + C|Dv^{α,δ}|² = (1/2) a^{ij} v^{α,δ}_{x_i x_k} v^{α,δ}_{x_j x_k} + C|Dv^{α,δ}|². (1.17)
Next, by using a modified Cauchy–Schwarz inequality for matrices (see Remark 1.3),

( tr(AB) )² ≤ tr(ABB) tr(A) for all A, B ∈ M^{n×n}_{sym} with A ≥ 0, (1.18)

we obtain

( a^{ij} v^{α,δ}_{x_i x_j} )² = ( tr(A(x)D²v^{α,δ}) )² ≤ tr( A(x)D²v^{α,δ}D²v^{α,δ} ) tr(A(x))
≤ C tr( A(x)D²v^{α,δ}D²v^{α,δ} ) = C a^{ij} v^{α,δ}_{x_i x_k} v^{α,δ}_{x_j x_k}. (1.19)
In light of (1.19), for some c0 > 0 sufficiently small,

(1/2) a^{ij} v^{α,δ}_{x_i x_k} v^{α,δ}_{x_j x_k} + α|D²v^{α,δ}|² ≥ 4c0 ( ( a^{ij} v^{α,δ}_{x_i x_j} )² + ( α∆v^{α,δ} )² )
≥ 2c0 ( a^{ij} v^{α,δ}_{x_i x_j} + α∆v^{α,δ} )² = 2c0 ( δv^{α,δ} + H(x, Dv^{α,δ}) )²
≥ c0 H(x, Dv^{α,δ})² − C. (1.20)
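The last two steps in (1.20) use only elementary inequalities: with X := a^{ij}v^{α,δ}_{x_i x_j}, Y := α∆v^{α,δ}, and the bound |δv^{α,δ}| ≤ M,

```latex
X^2 + Y^2 \ \ge\ \tfrac12\,(X + Y)^2,
\qquad
\big(\delta v^{\alpha,\delta} + H\big)^2
\ \ge\ \tfrac12 H^2 - \big(\delta v^{\alpha,\delta}\big)^2
\ \ge\ \tfrac12 H^2 - M^2,
```

together with equation (1.15), which gives X + Y = δv^{α,δ} + H(x, Dv^{α,δ}).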
Combining (1.16), (1.17), and (1.20), we achieve

D_xH · Dv^{α,δ} − C|Dv^{α,δ}|² + c0 H(x, Dv^{α,δ})² ≤ C.
We then use (1.13) in the above to get the existence of a constant C > 0 independent
of α, δ so that |Dv α,δ (x0 )| ≤ C. Therefore, as in the proof of Theorem 1.1, setting
wα,δ (x) := v α,δ (x) − v α,δ (0), by passing some subsequences if necessary, we can send
α → 0 and δ → 0 in this order to yield that wα,δ and δv α,δ , respectively, converge
uniformly to v and −c in Tn . Clearly, (v, c) satisfies (1.8) in the viscosity sense.
Remark 1.3. We give a simple proof of (1.18) here. By the Cauchy–Schwarz inequality for the Frobenius inner product, we always have

( tr(aᵀb) )² ≤ tr(aᵀa) tr(bᵀb) for all a, b ∈ M^{n×n}.

For A, B ∈ M^{n×n}_{sym} with A ≥ 0, set a := A^{1/2} and b := A^{1/2}B. Then,

( tr(AB) )² = ( tr(aᵀb) )² ≤ tr(aᵀa) tr(bᵀb) = tr(A) tr(BAB) = tr(ABB) tr(A),

which is exactly (1.18).
Definition 1.1. For a pair (v, c) ∈ C(Tn ) × R solving one of the ergodic problems
(1.6)–(1.8), we call v and c an ergodic function and an ergodic constant, respectively.
We now proceed to show that an ergodic constant c is unique in all ergodic problems
(1.6)–(1.8). It is enough to consider the general case (1.8).
Proposition 1.4. Assume that H ∈ C²(Tn × Rn) and that (H1), (1.13) hold. Then ergodic problem (1.8) admits a unique ergodic constant c ∈ R, which is determined only by A and H.
Proof. Suppose that there exist two pairs of solutions (v1, c1), (v2, c2) ∈ Lip(Tn) × R to (1.8) with c1 ≠ c2. We may assume that c1 < c2 without loss of generality. Note that v1 − c1t − M and v2 − c2t + M are a subsolution and a supersolution to (1.3), respectively, for a suitably large M > 0. By the comparison principle for (1.3), we get

v1 − c1t − M ≤ v2 − c2t + M in Tn × [0, ∞).

Thus, (c2 − c1)t ≤ M′ for some M′ > 0 and all t ∈ (0, ∞), which yields a contradiction.
Remark 1.4. As shown in Proposition 1.4, an ergodic constant is unique but on the
other hand, ergodic functions are not unique in general. It is clear that, if v is an
ergodic function, then v + C is also an ergodic function for any C ∈ R. But even up
to additive constants, v is not unique. See Section 3.1.1.
The ergodic constant c is related to the effective Hamiltonian H̄ in the homogenization theory (see Lions, Papanicolaou, Varadhan [61]). In fact, c = H̄(0). In general, for p ∈ Rn, H̄(p) is the ergodic constant of

−tr( A(x)D²v ) + H(x, p + Dv) = H̄(p) in Tn.
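As a concrete illustration (ours, not from the notes), in one dimension with A ≡ 0 and H(x, p) = p²/2 + V(x), the cell problem |p + v′|²/2 + V(x) = H̄(p) can be solved by quadrature: for H̄ ≥ max V, |p + v′| = √(2(H̄ − V)), and periodicity of v forces |p| = G(H̄) := ∫₀¹ √(2(H̄ − V)) dx outside the flat piece {|p| ≤ G(max V)}, on which H̄(p) = max V. This is the classical 1D computation in the spirit of [61]; the potential V below is an illustrative choice.

```python
import numpy as np

# 1D effective Hamiltonian of  |p + v'(x)|^2/2 + V(x) = Hbar(p),  V(x) = cos(2*pi*x).
x = np.linspace(0.0, 1.0, 4001)
V = np.cos(2 * np.pi * x)
maxV = float(V.max())                     # = 1, and Hbar(0) = c = max V

def G(h):
    """G(h) = int_0^1 sqrt(2(h - V)) dx, increasing in h >= max V (trapezoid rule)."""
    f = np.sqrt(2.0 * np.maximum(h - V, 0.0))
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def Hbar(p):
    if abs(p) <= G(maxV):                 # flat piece of the effective Hamiltonian
        return maxV
    lo, hi = maxV, maxV + p * p           # bracket, then bisect G(h) = |p|
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if G(mid) < abs(p) else (lo, mid)
    return 0.5 * (lo + hi)

print(Hbar(0.0))   # the ergodic constant c = max V = 1
print(Hbar(3.0))   # strictly above max V outside the flat piece
```

The flat piece {H̄ = max V} around p = 0 is exactly the phenomenon c = H̄(0) = max V mentioned above.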
It is known that there are (abstract) formulas for the ergodic constant.
Proposition 1.5. Assume that H ∈ C²(Tn × Rn) and that (H1), (1.13) hold. The ergodic constant of (1.8) can be represented by

c = inf{ a ∈ R : there exists a solution to −tr( A(x)D²w ) + H(x, Dw) ≤ a in Tn }.

If moreover A ≡ 0 and p ↦ H(x, p) is convex, then also

c = inf_{φ∈C¹(Tn)} sup_{x∈Tn} H(x, Dφ(x)). (1.21)
Proof. Let us denote by d1, d2 the first and the second formulas in the statement of the proposition, respectively.
It is clear that d1 ≤ c. Suppose that c > d1. Then there exists a pair (va, a) with a < c such that va is a subsolution of (1.8) with constant a. By using the same argument as that of the proof of Proposition 1.4, we get (c − a)t ≤ M′ for some M′ > 0 and all t > 0, which is a contradiction. Therefore, c = d1.
We proceed to prove the second part of the proposition, in which A ≡ 0. For any fixed φ ∈ C¹(Tn),

H(x, Dφ(x)) ≤ sup_{x∈Tn} H(x, Dφ(x)) in Tn in the classical sense,

which implies that d1 ≤ sup_{x∈Tn} H(x, Dφ(x)). Take the infimum over φ ∈ C¹(Tn) to yield d1 ≤ d2.
Now let v ∈ Lip(Tn) be a subsolution of (1.6). Take ρ to be a standard mollifier, and set ρε(·) := ε⁻ⁿρ(·/ε) for ε > 0 and v^ε := ρε ∗ v. Then v^ε ∈ C∞(Tn). In light of the convexity of H and Jensen’s inequality,

H(x, Dv^ε(x)) = H( x, ∫_{Tn} ρε(y) Dv(x − y) dy )
≤ ∫_{Tn} H(x, Dv(x − y)) ρε(y) dy
≤ ∫_{Tn} H(x − y, Dv(x − y)) ρε(y) dy + Cε ≤ c + Cε in Tn.

Thus, d2 ≤ sup_{x∈Tn} H(x, Dv^ε(x)) ≤ c + Cε. Letting ε → 0, we get d2 ≤ c = d1. The proof is complete.
Remark 1.5. Concerning formula (1.21), it is important to point out that the approximation of a given subsolution v by mollification plays an essential role. This only works for first-order convex Hamilton–Jacobi equations, as seen in the proof of Proposition 1.5. If we consider first-order nonconvex Hamilton–Jacobi equations, then a smooth way to approximate a subsolution is not known. In light of the first formula of Proposition 1.5, in the case A ≡ 0 we only have

c = inf_{φ∈Lip(Tn)} sup_{x∈Tn} sup_{p∈D⁺φ(x)} H(x, p),
where we denote by D+ φ(x) the superdifferential of φ at x (see Section 4.2 for the
definition). An analog to (1.21) in the general degenerate viscous case is not known
yet even in the convex setting. See Section 3.4 for some further discussions.
At the end of this chapter, we give results on the boundedness of solutions to
(1.14), (1.3), and the asymptotic speed of solutions to (1.3). These are straightforward
consequences of Theorem 1.3.
Proposition 1.6. Assume that H ∈ C 2 (Tn × Rn ) and that (H1), (1.13) hold. Let v δ
be the viscosity solution of (1.14) and c be the associated ergodic constant. Then, there
exists C > 0 independent of δ such that

|v^δ + c/δ| ≤ C in Tn.
Proof. Let (v, c) be a solution of (1.8). Take a suitably large constant M > 0 so that v − c/δ − M and v − c/δ + M are a subsolution and a supersolution of (1.14), respectively. In light of the comparison principle for (1.14), we get

v(x) − c/δ − M ≤ v^δ(x) ≤ v(x) − c/δ + M for all x ∈ Tn,
which yields the conclusion.
Proposition 1.7. Assume that H ∈ C 2 (Tn × Rn ) and that (H1), (1.13) hold. Let u
be the viscosity solution of (1.3) with the given initial data u0 ∈ Lip (Tn ), and c be the
associated ergodic constant. Then,

u + ct is bounded on Tn × [0, ∞), (1.22)

and u(x, t)/t → −c uniformly on Tn as t → ∞.
Proof. Let (v, c) be a solution of (1.8). Take a suitably large constant M > 0 so that v − ct − M, v − ct + M are a subsolution and a supersolution of (1.3), respectively. In light of the comparison principle for (1.3), we get

v(x) − ct − M ≤ u(x, t) ≤ v(x) − ct + M in Tn × [0, ∞),

which yields both assertions.
Remark 1.6. A priori estimate (1.22) is the reason why we do not need to consider the
terms like bi (x)ti for i ≥ 2 in the formal asymptotic expansion of u in the introduction
of this chapter.
We also give here a result on the Lipschitz continuity of solutions to (1.3).
Proposition 1.8. Assume that H ∈ C 2 (Tn × Rn ) and that (H1), (1.13) hold. Assume
further that u0 ∈ C 2 (Tn ). Then the solution u to (1.3) is globally Lipschitz continuous
on Tn × [0, ∞), i.e.,
‖ut‖_{L∞(Tn×(0,∞))} + ‖Du‖_{L∞(Tn×(0,∞))} ≤ M for some M > 0.

Proof. Since u0 ∈ C²(Tn), for C > 0 large enough the functions u0 ± Ct are a supersolution and a subsolution of (1.3). The comparison principle, applied also to the pair u(·, · + s) and u(·, ·) ± Cs, yields |u(x, t + s) − u(x, t)| ≤ Cs. Therefore, |ut| ≤ M. By using the same method as that of the proof of Theorem 1.3, we get |Du| ≤ M′ for some M′ > 0.
As a corollary of Propositions 1.7 and 1.8, we can easily get that there exists a subsequence {t_j}_{j∈N} with t_j → ∞ as j → ∞ such that u(·, t_j) + ct_j converges uniformly on Tn as j → ∞.
Chapter 2
Large Time Asymptotics

We will first discuss this setting in Section 2.2. In this setting, as the Hamiltonian
has a simple structure, we are able to find an explicit subset of Tn on which the solution is monotone in time and which is a uniqueness set for the ergodic problem. Therefore, we can relatively easily get a convergence result of the type (1.5), that is,

u(x, t) − (v(x) − ct) → 0 uniformly on Tn as t → ∞,

where u is the solution of the initial value problem and (v, c) is a solution to the associated ergodic problem.
Fathi then gave a breakthrough in [34] by using a dynamical approach from the weak
KAM theory. Contrary to [72], the results of [34] use uniform convexity and smoothness
assumptions on the Hamiltonian but do not require any structural conditions as above.
These rely on a deep understanding of the dynamical structure of the solutions and of
the corresponding ergodic problem. See also the paper of Fathi and Siconolfi [35] for
a characterization of the Aubry set, which will be studied in Section 2.5. Afterwards,
Davini and Siconolfi in [25] and Ishii in [49] refined and generalized the approach of
Fathi, and studied the asymptotic problem for Hamilton–Jacobi equations on Tn and
on the whole n-dimensional Euclidean space, respectively.
Furthermore, Barles and Souganidis [12] obtained additional results, for possibly nonconvex Hamiltonians, by using a PDE method in the context of viscosity solutions.
Barles, Ishii and Mitake [8] simplified the ideas in [12] and presented the most general
assumptions (up to now).
In general, these methods are based crucially on delicate stability results: of extremal curves, in the context of the dynamical approach, in light of the finite speed of propagation; and of solutions for large time, in the context of the PDE approach.
In the uniformly parabolic setting, Barles and Souganidis [13] proved the large-time
convergence of solutions. Their proof relies on a completely distinct set of ideas from
the ones used in the first-order case as the associated ergodic problem has a simpler
structure. Indeed, since the strong maximum principle holds, the ergodic problem has
a unique solution up to additive constants. The proof for the large-time convergence
in [13] strongly depends on this fact. We will discuss this in Section 2.6.
It is clear that none of the aforementioned methods applies to the general degenerate viscous case, which will be described in detail in Section 2.4, because of the
presence of the second order terms and the lack of both the finite speed of propagation
as well as the strong comparison principle. Against this background, the authors, together with Cagnetti and Gomes [17], introduced a new method for the large-time behavior for
general viscous Hamilton–Jacobi equations (1.3). In this method, the nonlinear adjoint
method, which was introduced by Evans in [32], plays the essential role. In Section
2.3, we introduce this nonlinear adjoint method.
We first consider the case where the Hamiltonian has the separable form

H(x, p) = (1/2)|p|² + V(x),

where V ∈ C(Tn) is a given function. See Example 4.2 in the Appendix. In this case, since the structure of the Hamiltonian is simple, we can easily prove (1.5), that is,

u(x, t) − (v(x) − ct) → 0 uniformly on Tn as t → ∞.

This was first done by Namah and Roquejoffre in [72]. Firstly, let us find the ergodic constant c in this case. By Proposition 1.5, we have
c = inf_{φ∈C¹(Tn)} sup_{x∈Tn} ( (1/2)|Dφ(x)|² + V(x) ) ≥ sup_{x∈Tn} V(x) = max_{x∈Tn} V(x),

and choosing φ ≡ 0 gives c ≤ max_{x∈Tn} V(x), so that c = max_{x∈Tn} V(x). Set

A := { x ∈ Tn : V(x) = max_{x∈Tn} V(x) }.
Then, writing uc(x, t) := u(x, t) + ct, we observe, at least formally, that for x ∈ A,

(uc)t = c − V(x) − (1/2)|Duc|² = −(1/2)|Duc|² ≤ 0,

which implies the monotonicity of t ↦ uc(x, t) for x ∈ A. Thus, we get

lim inf∗_{t→∞} uc(x, t) = lim sup∗_{t→∞} uc(x, t) for all x ∈ A,
where lim inf∗ and lim sup∗ denote the half-relaxed limits as t → ∞. In view of the stability result of viscosity solutions (see Theorem 4.11 in Section 4.5), lim sup∗ uc and lim inf∗ uc are a subsolution and a supersolution of (2.1), respectively. Also notice that the set A is a uniqueness set of ergodic problem (2.1) (see Theorem 2.15 in Section 2.5), that is, for v1, v2 a subsolution and a supersolution of (2.1), respectively,

if v1 ≤ v2 on A, then v1 ≤ v2 on Tn.

This is an extremely important fact, and we will come back to it later. In light of this fact, we get lim sup∗ uc ≤ lim inf∗ uc on Tn, and thus the two half-relaxed limits coincide, which yields the convergence (1.5).
Next, we consider the Hamiltonian

H(x, p) = h(x)√(1 + |p|²),

where h ∈ C(Tn) with h(x) > 0 for all x ∈ Tn is a given function; this Hamiltonian appears in front propagation problems. See Section 4.1.1 in the Appendix. We obtain the ergodic constant c first as follows. The ergodic problem reads

h(x)√(1 + |Dv|²) = c ⟺ |Dv|² = ( c² − h(x)² ) / h(x)². (2.3)

For the right-hand side to be nonnegative we need c ≥ max_{x∈Tn} h(x), while testing the supersolution property of (2.3) at a minimum point of v gives the reverse bound. Thus, we get

c = max_{x∈Tn} h(x), (2.4)

with A := { x ∈ Tn : h(x) = max_{x∈Tn} h(x) } again denoting the set of maximum points.
Thus, we can see that A is a uniqueness set for (2.3) as in Section 2.2.1. The large
time behavior result follows in a similar manner.
Moreover, we have the following proposition.
Proposition 2.1. If the initial data u0 is a subsolution of (2.3), then u(x, t)+ct = u0 (x)
for all (x, t) ∈ A × [0, ∞). In particular,

lim_{t→∞} ( u(x, t) + ct ) = u0(x) for all x ∈ A.

As an example, let n = 1 and let h be given in terms of the function f, where f(x) := 2 min{|x − 1/4|, |x − 3/4|} for all x ∈ [0, 1]. See Figure 2.1 for the graph of h. Consider u0 ≡ 0 on T. Since u0 is a subsolution to (2.3), in light of Proposition
2.1, we obtain
lim_{t→∞} ( u(x, t) + ct ) = u0(x) = 0 for x ∈ A = {1/4, 3/4},
which is enough to characterize the limit. See Figure 2.1. We will give further discus-
sions on this example in Section 2.5.
Can we expect such a monotonicity in the general setting? The answer is NO. For
instance, if we consider the Hamilton–Jacobi equation

ut + (1/2)|Du − b(x)|² = |b(x)|² in Tn × (0, ∞),
where b : Tn → Rn is a given smooth vector field, then we cannot find such an easy
structure of solutions. Therefore, we need more profound arguments to prove (1.5) in
the general case.
2.3 First-Order Case with General Hamiltonians

Theorem 2.2. Assume that (H2)–(H4) hold. Let u be the solution of (1.1) with a
given initial data u0 ∈ Lip (Tn ). Then there exists (v, c) ∈ Lip (Tn ) × R, a solution of
ergodic problem (1.6), such that (1.5) holds, that is,
We remark that the assumptions cannot simply be dropped. For instance, for the linear equation ut + ux = 0 in T × (0, ∞) with a nonconstant initial datum u0 ∈ Lip(T), the traveling wave u(x, t) = u0(x − t) is the solution of the above but convergence (1.5) does not hold. This was first pointed out by Barles and Souganidis in [12].
We also point out that the convexity is NOT a necessary condition either, since it
is known that there are convergence results for possibly nonconvex Hamilton–Jacobi
equations in [12, 8]. A typical example for nonconvex Hamiltonians is H(x, p) :=
(|p|2 − 1)2 − V (x).
Formally, we will see that ut(x, t) → −c in an appropriate sense as t → ∞, where c is the ergodic constant of (1.6). We call this an asymptotic monotone property of the solution to (1.1). This is a much stronger result than that of Proposition 1.7.
We “assume” that u is smooth below in the derivation. Notice that this is a completely
formal assumption as we cannot expect a global smooth solution u of Hamilton–Jacobi
equations in general.
Let us first fix T > 0. We consider the adjoint equation of the linearized operator
of the Hamilton–Jacobi equation:
−σt − div( D_pH(x, Du(x, t)) σ ) = 0 in Tn × (0, T),
σ(x, T) = δ_{x0}(x) on Tn, (2.8)
where δx0 is the Dirac delta measure at a fixed point x0 ∈ Tn . Note that although (2.8)
may have only a very singular solution, we do not mind in this section as this is just a
formal argument. It is clear that
σ ≥ 0 in Tn × [0, T] and ∫_{Tn} σ(x, t) dx = 1 for all t ∈ [0, T]. (2.9)
Noting (2.9) and |Du(x, t)| ≤ C by Proposition 1.8, in light of the Riesz theorem,
there exists νT ∈ P(Tn × Rn ) such that
∫∫_{Tn×Rn} ϕ(x, p) dνT(x, p) = (1/T) ∫₀^T ∫_{Tn} ϕ(x, Du)σ dx dt
for all ϕ ∈ Cc (Tn ×Rn ). Here P(Tn ×Rn ) is the set of all Radon probability measures on
Tn × Rn . Because of the gradient bound of u, we obtain that supp (νT ) ⊂ Tn × B(0, C),
where supp(νT) denotes the support of νT, that is,

supp(νT) = { (x, p) ∈ Tn × Rn : νT( B((x, p), r) ) > 0 for all r > 0 }.
Since ∫∫_{Tn×Rn} dνT(x, p) = 1 and supp(νT) ⊂ Tn × B(0, C), there exist a subsequence Tj → ∞ and a measure ν ∈ P(Tn × Rn) such that ν_{Tj} converges weakly to ν. It turns out that (i) ν is a Mather measure, which is invariant under the Hamiltonian dynamics and satisfies ∫∫_{Tn×Rn} H(x, p) dν(x, p) = c, and (ii) supp(ν) is contained in a graph over the x-variable.
We do not give the proofs of these facts here. We will give the definition of Mather
measures in Chapter 3. Property (ii) in the above is called the graph theorem in the
Hamiltonian dynamics, which is an extremely important result (see [63, 62] for details).
One way to look at (i) is the following: if we think of Du as a given function in (2.8),
then (2.8) is a transport equation, and the characteristic ODE is given by
Ẋ(t) = D_pH( X(t), Du(X(t), t) ) for t ∈ (0, T),
X(T) = x0, (2.11)
which is formally equivalent to the Hamiltonian system.
If we admit these, then we obtain
ut(x0, Tj) = −(1/Tj) ∫₀^{Tj} ∫_{Tn} H(x, Du)σ dx dt
= −∫∫_{Tn×Rn} H(x, p) dν_{Tj}(x, p) → −∫∫_{Tn×Rn} H(x, p) dν(x, p) = −c as j → ∞.
To make the above argument rigorous, we need: (i) to regularize the equations so that their solutions become smooth enough to justify the computations, and (ii) to introduce a scaling process for t, as we need to look at both limits of a regularizing process and the large-time behavior.
(C)ε   εu^ε_t + H(x, Du^ε) = 0 in Tn × (0, ∞),  u^ε(x, 0) = u0(x) on Tn.
By repeating the proof of Proposition 1.8 with a small modification, we have the following a priori estimates:

‖u^ε_t‖_{L∞(Tn×[0,1])} ≤ C/ε,  ‖Du^ε‖_{L∞(Tn×[0,1])} ≤ C, (2.12)
for some constant C > 0 independent of ε. Notice that in general, the function uε is
only Lipschitz continuous.
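The rescaling behind (C)ε — defined in a part of the notes not reproduced here, but standard — is presumably u^ε(x, t) := u(x, t/ε), so that the time t = 1 corresponds to the large time T = 1/ε:

```latex
u^\varepsilon_t(x,t) = \tfrac{1}{\varepsilon}\, u_t\big(x, \tfrac{t}{\varepsilon}\big),
\qquad
Du^\varepsilon(x,t) = Du\big(x, \tfrac{t}{\varepsilon}\big),
\qquad\text{hence}\qquad
\varepsilon u^\varepsilon_t + H(x, Du^\varepsilon)
= \big(u_t + H(x, Du)\big)\big(x, \tfrac{t}{\varepsilon}\big) = 0.
```

Under this scaling, the estimates (2.12) follow at once from Proposition 1.8.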
For this reason, we add a viscosity term to (C)ε , and consider the following regu-
larized equation
(A)ε   εw^ε_t + H(x, Dw^ε) = ε⁴∆w^ε in Tn × (0, ∞),  w^ε(x, 0) = u0(x) on Tn.
We also consider the corresponding regularized ergodic problem

(E)ε   H(x, Dv^ε) = ε⁴∆v^ε + H̄^ε in Tn.
By Theorem 1.2 and Proposition 1.4, the ergodic constant H̄^ε of (E)ε exists and is unique. Besides, there exists a smooth solution v^ε to (E)ε.
The advantage of considering (A)ε and (E)ε lies in the fact that the solutions of
these equations are smooth, and this will allow us to use the nonlinear adjoint method
to perform rigorous calculations in the next subsection.
Proposition 2.3. Assume that (1.11), (H2) and (H4) hold, and the ergodic constant
of (1.6) is 0. Let uε and wε be the solution of (C)ε and (A)ε with a given initial data
u0 ∈ Lip(Tn), respectively. There exists C > 0 independent of ε such that

‖u^ε(·, 1) − w^ε(·, 1)‖_{L∞(Tn)} ≤ Cε² and |H̄^ε| ≤ Cε².

Proof. For η > 0 and K > 0 to be fixed later, define

Φ(x, y, t) := u^ε(x, t) − w^ε(y, t) − |x − y|²/(2η) − Kt,

and pick (xη, yη, tη) ∈ Tn × Tn × [0, 1] such that

Φ(xη, yη, tη) = max_{x,y∈Tn, t∈[0,1]} Φ(x, y, t).
In the case tη > 0, in light of Ishii’s lemma (see [23, Theorem 3.2.19]), for any ρ ∈ (0, 1), there exist (aη, pη, Xη) ∈ J^{2,+}u^ε(xη, tη) and (bη, pη, Yη) ∈ J^{2,−}w^ε(yη, tη) such that

aη − bη = K,  pη = (xη − yη)/η,  ( Xη 0 ; 0 −Yη ) ≤ Aη + ρAη², (2.13)

where

Aη := (1/η) ( In −In ; −In In ),

with ( · ; · ) denoting the 2n × 2n block matrix with the indicated rows. Here, J^{2,+} and J^{2,−} denote the super-semijet and the sub-semijet, respectively (see Section 4.2).
28 CHAPTER 2. LARGE TIME ASYMPTOTICS
We need to be careful in the case tη = 1, which is handled by Lemma 4.4. By the definition of viscosity solutions,

εaη + H(xη, pη) ≤ 0 and εbη + H(yη, pη) ≥ ε⁴tr(Yη),

which implies

εK + H(xη, pη) − H(yη, pη) ≤ −ε⁴tr(Yη).
Note that, testing the matrix inequality in (2.13) with the vectors (0, ε²e_i) ∈ R^{2n}, i = 1, . . . , n,

−ε⁴ tr(Yη) = Σ_{i=1}^{n} ⟨ ( Xη 0 ; 0 −Yη )(0, ε²e_i), (0, ε²e_i) ⟩
≤ Σ_{i=1}^{n} ( ⟨Aη(0, ε²e_i), (0, ε²e_i)⟩ + ρ⟨Aη²(0, ε²e_i), (0, ε²e_i)⟩ )
= nε⁴/η + O(ρ) ≤ Cε⁴/η + O(ρ).
Since Φ(yη, yη, tη) ≤ Φ(xη, yη, tη), we have

u^ε(yη, tη) − w^ε(yη, tη) − Ktη ≤ u^ε(xη, tη) − w^ε(yη, tη) − |xη − yη|²/(2η) − Ktη,

which implies |pη| ≤ C for some C > 0 in view of the Lipschitz continuity of u^ε. Thus, |xη − yη| ≤ Cη. Therefore,
εK ≤ Cε⁴/η + Cη + O(ρ) as ρ → 0,

which is a contradiction once K is chosen with εK > Cε⁴/η + Cη. For such K, we must therefore have tη = 0.
Setting η := ε²,
we get u^ε(x, 1) − w^ε(x, 1) ≤ Cε² for all x ∈ Tn. By exchanging the roles of u^ε and w^ε in Φ and repeating a similar argument, we obtain ‖u^ε(·, 1) − w^ε(·, 1)‖_{L∞(Tn)} ≤ Cε².
Let us now prove

H̄^ε = H̄^ε − c ≤ Cε²

in a similar way. Set

Ψ(x, y) := v(x) − v^ε(y) − |x − y|²/(2η),

where v^ε and v are solutions to (E)ε and (1.6), respectively. For a maximum point (xη, yη) of Ψ on Tn × Tn, we have
Remark 2.2. As seen in the proof, the vanishing viscosity method gives that the rate of convergence of u^ε − w^ε is

√( (viscosity coefficient) / (coefficient of u^ε_t and w^ε_t) ).

Because of this fact, we can choose ε^α for any α > 2 as the coefficient of the viscosity terms in (A)ε and (E)ε. We choose α = 4 here just to make the computations nice and clear.
Here σ^ε solves the adjoint equation of the linearization of (A)ε, with terminal datum a Dirac mass:

(AJ)ε   −εσ^ε_t − div( D_pH(x, Dw^ε)σ^ε ) = ε⁴∆σ^ε in Tn × (0, 1),  σ^ε(x, 1) = δ_{x0}(x) on Tn.

Lemma 2.4. We have σ^ε > 0 in Tn × (0, 1) and ∫_{Tn} σ^ε(x, t) dx = 1 for all t ∈ [0, 1].

Proof. We have that σ^ε > 0 in Tn × (0, 1) by the strong maximum principle for (AJ)ε.
Since

(d/dt) ∫_{Tn} σ^ε(x, t) dx = −(1/ε) ∫_{Tn} ( div( D_pH(x, Dw^ε)σ^ε ) + ε⁴∆σ^ε ) dx = 0,

as the integrand is a divergence on the torus, we conclude

∫_{Tn} σ^ε(x, t) dx = ∫_{Tn} σ^ε(x, 1) dx = ∫_{Tn} δ_{x0} dx = 1.
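On the discrete level, this conservation of mass has a transparent analog (a sketch of ours; the drift b, the viscosity ν, and the grid are illustrative stand-ins for D_pH(x, Dw^ε) and ε⁴): a flux-form upwind discretization of a transport–diffusion equation conserves the total mass exactly, because every flux enters two neighboring cells with opposite signs.

```python
import numpy as np

# Conservative (flux-form) upwind scheme for  sigma_t + div(b(x) sigma) = nu * sigma_xx
# on the periodic 1D grid; the total mass sum(sigma)*dx is preserved exactly.
N, dx, dt, nu = 200, 1.0 / 200, 1e-4, 1e-3
x = np.arange(N) * dx
b = np.sin(2 * np.pi * x)             # smooth periodic drift (illustrative)
sigma = np.exp(-100 * (x - 0.5) ** 2)
sigma /= sigma.sum() * dx             # normalize: discrete integral = 1

mass0 = sigma.sum() * dx
for _ in range(2000):
    bf = 0.5 * (b + np.roll(b, -1))                    # velocity at right cell faces
    flux = np.where(bf > 0, bf * sigma, bf * np.roll(sigma, -1))   # upwind flux
    div = (flux - np.roll(flux, 1)) / dx               # conservative divergence
    lap = (np.roll(sigma, -1) - 2 * sigma + np.roll(sigma, 1)) / dx ** 2
    sigma = sigma + dt * (-div + nu * lap)

print(abs(sigma.sum() * dx - mass0))   # conserved up to round-off
```

The point is structural, not numerical: both the divergence and the discrete Laplacian telescope to zero when summed over the periodic grid, mirroring the integration by parts on Tn above.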
Lemma 2.5 (Conservation of Energy 1). Assume that (H2)–(H4) hold. Then we have the following properties:

(i) (d/dt) ∫_{Tn} ( H(x, Dw^ε) − ε⁴∆w^ε ) σ^ε dx = 0,

(ii) εw^ε_t(x0, 1) = −∫₀¹ ∫_{Tn} ( H(x, Dw^ε) − ε⁴∆w^ε ) σ^ε dx dt.
Proof. We only need to prove (i) as (ii) follows directly from (i). This is a straightfor-
ward result of adjoint operators and comes from a direct calculation:
(d/dt) ∫_{Tn} ( H(x, Dw^ε) − ε⁴∆w^ε ) σ^ε dx
= ∫_{Tn} ( D_pH(x, Dw^ε) · Dw^ε_t − ε⁴∆w^ε_t ) σ^ε dx + ∫_{Tn} ( H(x, Dw^ε) − ε⁴∆w^ε ) σ^ε_t dx
= −∫_{Tn} ( div( D_pH(x, Dw^ε)σ^ε ) + ε⁴∆σ^ε ) w^ε_t dx − ∫_{Tn} εw^ε_t σ^ε_t dx = 0.
Remark 2.3.
(i) We stress the fact that identity (ii) in Lemma 2.5 is extremely important. If we
scale back the time, the integral on the right hand side becomes
\[
-\frac{1}{T}\int_0^{T}\int_{\mathbb{T}^{n}}\big[H(x,Dw^{\varepsilon})-\varepsilon^{4}\Delta w^{\varepsilon}\big]\sigma^{\varepsilon}(x,t)\,dx\,dt,
\]
where T = 1/ε → ∞. This is the averaging action as t → ∞, which is a key observation.
We observed this in a formal calculation in Section 2.3.1.
(ii) We emphasize here that we have not used any specific structure of the equations so far, and therefore this conservation law holds in a much more general setting. To analyze further, we need to impose more specific structures and perform some delicate analysis. It is worth mentioning that, for this reason, this method for the large-time asymptotics of nonlinear equations is universal and robust in principle.
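Concretely, the averaging in (i) above is the change of variables t = s/T with T = 1/ε; writing g(t) for the inner spatial integral, a one-line check gives:

```latex
% Set g(t) := \int_{\mathbb{T}^n}\big[H(x,Dw^{\varepsilon})
%             -\varepsilon^{4}\Delta w^{\varepsilon}\big]\sigma^{\varepsilon}(x,t)\,dx.
% With t = s/T and dt = ds/T, the unit-time integral is a genuine time
% average over the long horizon [0, T]:
-\int_0^{1} g(t)\,dt \;=\; -\frac{1}{T}\int_0^{T} g\!\left(\frac{s}{T}\right) ds .
```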
The following theorem is a rigorous interpretation of asymptotic monotone property
(2.7) of the solution to (1.1), which is essential in the proof of Theorem 2.2.
Theorem 2.6. Assume that (H2)–(H4) hold, and the ergodic constant of (1.6) is 0.
We have
\[
\lim_{\varepsilon\to0}\varepsilon\|w^{\varepsilon}_t(\cdot,1)\|_{L^{\infty}(\mathbb{T}^{n})}=0.
\]
In fact, we will show the quantitative bound
\[
\varepsilon\|w^{\varepsilon}_t(\cdot,1)\|_{L^{\infty}(\mathbb{T}^{n})}=\|H(\cdot,Dw^{\varepsilon}(\cdot,1))-\varepsilon^{4}\Delta w^{\varepsilon}(\cdot,1)\|_{L^{\infty}(\mathbb{T}^{n})}\le C\varepsilon^{1/2}.
\]
To prove this, we use the following key estimates, which will be proved in the next
subsection.
Lemma 2.7 (Key Estimates 1). Assume that (H2)–(H4) hold, and the ergodic constant of (1.6) is 0. There exists a positive constant C, independent of ε, such that the following hold:
(i) \(\displaystyle \int_0^1\int_{\mathbb{T}^{n}}|D(w^{\varepsilon}-v^{\varepsilon})|^{2}\sigma^{\varepsilon}\,dx\,dt\le C\varepsilon,\)
(ii) \(\displaystyle \int_0^1\int_{\mathbb{T}^{n}}|D^{2}(w^{\varepsilon}-v^{\varepsilon})|^{2}\sigma^{\varepsilon}\,dx\,dt\le C\varepsilon^{-7}.\)
We now give the proof of Theorem 2.6 by using the averaging action above and the
key estimates in Lemma 2.7.
Proof of Theorem 2.6. Let us first choose x0, which may depend on ε, such that
\[
|\varepsilon w^{\varepsilon}_t(x_0,1)|=\|\varepsilon w^{\varepsilon}_t(\cdot,1)\|_{L^{\infty}(\mathbb{T}^{n})}
=\|H(\cdot,Dw^{\varepsilon}(\cdot,1))-\varepsilon^{4}\Delta w^{\varepsilon}(\cdot,1)\|_{L^{\infty}(\mathbb{T}^{n})}.
\]
By Lemma 2.5 (ii) and the equation (E)ε for vε,
\[
|\varepsilon w^{\varepsilon}_t(x_0,1)|\le C\int_0^1\int_{\mathbb{T}^{n}}\big[|D(w^{\varepsilon}-v^{\varepsilon})|+\varepsilon^{4}|D^{2}(w^{\varepsilon}-v^{\varepsilon})|\big]\sigma^{\varepsilon}\,dx\,dt+C\varepsilon^{2}.
\]
We finally use the Hölder inequality and Lemma 2.7 to get
\[
|\varepsilon w^{\varepsilon}_t(x_0,1)|\le C\varepsilon^{1/2}.
\]
32 CHAPTER 2. LARGE TIME ASYMPTOTICS
Let us now present the proof of the large time asymptotics of u, Theorem 2.2.
Proof of Theorem 2.2. Firstly, the equi-Lipschitz continuity of {wε (·, 1)}ε>0 is obtained
by an argument similar to that of the proof of Theorem 1.2. Therefore, we are able to
choose a sequence εm → 0 as m → ∞ such that {wεm (·, 1)}m∈N converges uniformly
to a continuous function v in Tn , which may depend on the choice of {εm }m∈N . We let
tm := 1/εm for m ∈ N, and use Proposition 2.3 to deduce that
\[
\|u(\cdot,t_m)-v\|_{L^{\infty}(\mathbb{T}^{n})}\to0\quad\text{as }m\to\infty.
\]
Let us show that the limit of u(·, t) as t → ∞ does not depend on the sequence
{tm }m∈N . In view of Theorem 2.6, which is one of our main results in this chapter,
v is a solution of (E), and thus a (time independent) solution of the equation in (C).
Therefore, for any x ∈ Tn , and t > 0 such that tm ≤ t < tm+1 , we use the comparison
principle for (C) to yield that
\[
|u(x,t)-v(x)|\le \|u(\cdot,t_m+(t-t_m))-v(\cdot)\|_{L^{\infty}(\mathbb{T}^{n})}\le \|u(\cdot,t_m)-v(\cdot)\|_{L^{\infty}(\mathbb{T}^{n})}.
\]
Thus,
\[
\lim_{t\to\infty}\|u(\cdot,t)-v(\cdot)\|_{L^{\infty}(\mathbb{T}^{n})}\le \lim_{m\to\infty}\|u(\cdot,t_m)-v(\cdot)\|_{L^{\infty}(\mathbb{T}^{n})}=0,
\]
which gives the conclusion.
This is one of the key estimates which was first introduced by Evans [32] in the
study of gradient shock structures of the vanishing viscosity procedure of nonconvex,
first-order Hamilton–Jacobi equations. See also Tran [78]. The convexity of H is not
needed at all to get the conclusion of this lemma as can be seen in the proof.
Proof. By a computation similar to that in the proof of Theorem 1.2, for ϕ(x, t) := |Dwε|²/2, we have
\[
\varepsilon\varphi_t+D_pH\cdot D\varphi+D_xH\cdot Dw^{\varepsilon}=\varepsilon^{4}\big(\Delta\varphi-|D^{2}w^{\varepsilon}|^{2}\big).
\]
Multiply the above by σε and integrate over Tn × [0, 1] to yield
\[
\varepsilon^{4}\int_0^1\int_{\mathbb{T}^{n}}|D^{2}w^{\varepsilon}|^{2}\sigma^{\varepsilon}\,dx\,dt
=-\int_0^1\int_{\mathbb{T}^{n}}\big(\varepsilon\varphi_t+D_pH\cdot D\varphi-\varepsilon^{4}\Delta\varphi\big)\sigma^{\varepsilon}\,dx\,dt
-\int_0^1\int_{\mathbb{T}^{n}}D_xH\cdot Dw^{\varepsilon}\,\sigma^{\varepsilon}\,dx\,dt.
\]
Proof of Lemma 2.7 (i). Subtracting equation (A)ε from (E)ε and using the uniform convexity of H, we get
\begin{align*}
\frac{1}{C}\int_0^1\int_{\mathbb{T}^{n}}|D(v^{\varepsilon}-w^{\varepsilon})|^{2}\sigma^{\varepsilon}\,dx\,dt
&\le \overline{H}^{\varepsilon}-\int_0^1\int_{\mathbb{T}^{n}}\varepsilon\big((v^{\varepsilon}-w^{\varepsilon})\sigma^{\varepsilon}\big)_t\,dx\,dt\\
&\quad+\int_0^1\int_{\mathbb{T}^{n}}\big[\varepsilon\sigma^{\varepsilon}_t+\operatorname{div}\big(D_pH(x,Dw^{\varepsilon})\sigma^{\varepsilon}\big)+\varepsilon^{4}\Delta\sigma^{\varepsilon}\big](v^{\varepsilon}-w^{\varepsilon})\,dx\,dt\\
&=\overline{H}^{\varepsilon}+\Big[\varepsilon\int_{\mathbb{T}^{n}}(w^{\varepsilon}-v^{\varepsilon})\sigma^{\varepsilon}\,dx\Big]_{t=0}^{t=1}\\
&=\overline{H}^{\varepsilon}+\varepsilon\big(w^{\varepsilon}(x_0,1)-v^{\varepsilon}(x_0)\big)-\varepsilon\int_{\mathbb{T}^{n}}\big(u_0(x)-v^{\varepsilon}(x)\big)\sigma^{\varepsilon}(x,0)\,dx\\
&=\overline{H}^{\varepsilon}+\varepsilon w^{\varepsilon}(x_0,1)-\varepsilon\int_{\mathbb{T}^{n}}\big(v^{\varepsilon}(x_0)-v^{\varepsilon}(x)\big)\sigma^{\varepsilon}(x,0)\,dx-\varepsilon\int_{\mathbb{T}^{n}}u_0(x)\,\sigma^{\varepsilon}(x,0)\,dx.
\end{align*}
Since H̄ε ≤ Cε², and wε(·,1), vε, u0 are uniformly bounded while ∫_{Tn} σε(x,0) dx = 1, each term on the right-hand side is bounded by Cε, which yields (i).
Proof of Lemma 2.7 (ii). Subtract (A)ε from (E)ε and differentiate with respect to xi to get
\[
\varepsilon(v^{\varepsilon}-w^{\varepsilon})_{x_it}+D_pH(x,Dv^{\varepsilon})\cdot Dv^{\varepsilon}_{x_i}-D_pH(x,Dw^{\varepsilon})\cdot Dw^{\varepsilon}_{x_i}
+H_{x_i}(x,Dv^{\varepsilon})-H_{x_i}(x,Dw^{\varepsilon})=\varepsilon^{4}\Delta(v^{\varepsilon}-w^{\varepsilon})_{x_i}.
\]
Let ϕ(x, t) := |D(vε − wε)|²/2. Multiplying the last identity by (vε − wε)_{xi} and summing up with respect to i, we achieve that
\begin{align*}
&\varepsilon\varphi_t+D_pH(x,Dw^{\varepsilon})\cdot D\varphi+\big[\big(D_pH(x,Dv^{\varepsilon})-D_pH(x,Dw^{\varepsilon})\big)\cdot Dv^{\varepsilon}_{x_i}\big](v^{\varepsilon}_{x_i}-w^{\varepsilon}_{x_i})\\
&\quad+\big(D_xH(x,Dv^{\varepsilon})-D_xH(x,Dw^{\varepsilon})\big)\cdot D(v^{\varepsilon}-w^{\varepsilon})
-\varepsilon^{4}\big(\Delta\varphi-|D^{2}(v^{\varepsilon}-w^{\varepsilon})|^{2}\big)=0.
\end{align*}
The right-hand side of (2.14) is a dangerous term because of the term |D²vε|. We now take advantage of Lemma 2.8 to handle it. Using the fact that \(\|Dv^{\varepsilon}\|_{L^{\infty}}\) and \(\|Dw^{\varepsilon}\|_{L^{\infty}}\) are bounded, we have
\[
\varepsilon\varphi_t+D_pH(x,Dw^{\varepsilon})\cdot D\varphi-\varepsilon^{4}\Delta\varphi+\frac{\varepsilon^{4}}{2}|D^{2}(v^{\varepsilon}-w^{\varepsilon})|^{2}
\le C|D(v^{\varepsilon}-w^{\varepsilon})|^{2}+\frac{C}{\varepsilon^{4}}|D(v^{\varepsilon}-w^{\varepsilon})|^{2}+C|D^{2}w^{\varepsilon}|. \tag{2.16}
\]
We multiply (2.16) by σε, integrate over Tn × [0, 1], and use integration by parts to yield, in light of Lemma 2.8 and (i),
\begin{align*}
\varepsilon^{4}\int_0^1\int_{\mathbb{T}^{n}}|D^{2}(w^{\varepsilon}-v^{\varepsilon})|^{2}\sigma^{\varepsilon}\,dx\,dt
&\le C\varepsilon+\frac{C}{\varepsilon^{4}}\,\varepsilon+C\int_0^1\int_{\mathbb{T}^{n}}|D^{2}w^{\varepsilon}|\,\sigma^{\varepsilon}\,dx\,dt\\
&\le \frac{C}{\varepsilon^{3}}+C\Big(\int_0^1\int_{\mathbb{T}^{n}}|D^{2}w^{\varepsilon}|^{2}\sigma^{\varepsilon}\,dx\,dt\Big)^{1/2}\Big(\int_0^1\int_{\mathbb{T}^{n}}\sigma^{\varepsilon}\,dx\,dt\Big)^{1/2}\\
&\le \frac{C}{\varepsilon^{3}}+\frac{C}{\varepsilon^{2}}\le \frac{C}{\varepsilon^{3}}.
\end{align*}
Dividing both sides by ε⁴ gives (ii).
Remark 2.4. The estimates in Lemma 2.7 give us much better control of D(wε − v ε )
and D2 (wε −v ε ) on the support of σ ε . More precisely, the classical a priori estimates by
using the Bernstein method as in the proof of Theorem 1.2 only imply that D(wε − v ε )
and ε4 ∆(wε − v ε ) are bounded.
2.4. DEGENERATE VISCOUS CASE 35
By using the adjoint equation, we can get further formally that ε−1/2 D(wε − v ε )
and ε7/2 D2 (wε −v ε ) are bounded on the support of σ ε . Clearly, these new estimates are
much stronger than the known ones on the support of σ ε . However, we must point out
that, as ε → 0, the supports of subsequential limits of {σ ε }ε>0 could be very singular.
A deeper understanding of this point is essential for further developments of this new approach in the near future.
It is also worth mentioning that we eventually do not need to use the graph theorem
in the whole procedure above.
The proof of this lemma is similar to that of Lemma 2.5, hence is omitted.
Now, as in the proof of Theorem 2.6, we have
\begin{align*}
\varepsilon\|w^{\varepsilon}_t(\cdot,1)\|_{L^{\infty}(\mathbb{T})}
&=\|H(\cdot,w^{\varepsilon}_x(\cdot,1))-(\varepsilon^{4}+a(x))w^{\varepsilon}_{xx}(\cdot,1)\|_{L^{\infty}(\mathbb{T})}\\
&=\Big|\int_0^1\int_{\mathbb{T}}\big[H(x,w^{\varepsilon}_x)-(\varepsilon^{4}+a(x))w^{\varepsilon}_{xx}\big]\sigma^{\varepsilon}\,dx\,dt\Big|\\
&=\Big|\int_0^1\int_{\mathbb{T}}\big[H(x,w^{\varepsilon}_x)-(\varepsilon^{4}+a(x))w^{\varepsilon}_{xx}-\big(H(x,v^{\varepsilon}_x)-(\varepsilon^{4}+a(x))v^{\varepsilon}_{xx}-\overline{H}^{\varepsilon}\big)\big]\sigma^{\varepsilon}\,dx\,dt\Big|\\
&\le \int_0^1\int_{\mathbb{T}}\big(|H(x,w^{\varepsilon}_x)-H(x,v^{\varepsilon}_x)|+|(\varepsilon^{4}+a(x))(w^{\varepsilon}-v^{\varepsilon})_{xx}|\big)\sigma^{\varepsilon}\,dx\,dt+\overline{H}^{\varepsilon}\\
&\le C\bigg[\Big(\int_0^1\int_{\mathbb{T}}|(w^{\varepsilon}-v^{\varepsilon})_x|^{2}\sigma^{\varepsilon}\,dx\,dt\Big)^{1/2}
+\varepsilon^{4}\Big(\int_0^1\int_{\mathbb{T}}|(w^{\varepsilon}-v^{\varepsilon})_{xx}|^{2}\sigma^{\varepsilon}\,dx\,dt\Big)^{1/2}\\
&\qquad+\Big(\int_0^1\int_{\mathbb{T}}a^{2}(x)|(w^{\varepsilon}-v^{\varepsilon})_{xx}|^{2}\sigma^{\varepsilon}\,dx\,dt\Big)^{1/2}\bigg]+\overline{H}^{\varepsilon},
\end{align*}
and the right-hand side is controlled by the following key estimates.
Lemma 2.11 (Key Estimates 2). Assume that (H1)–(H4) hold, and the associated ergodic constant is zero. There exists a constant C > 0, independent of ε, such that
(i) \(\displaystyle \int_0^1\int_{\mathbb{T}}|(w^{\varepsilon}-v^{\varepsilon})_x|^{2}\sigma^{\varepsilon}\,dx\,dt\le C\varepsilon,\)
(ii) \(\displaystyle \int_0^1\int_{\mathbb{T}}(a(x)+\varepsilon^{4})|w^{\varepsilon}_{xx}|^{2}\sigma^{\varepsilon}\,dx\,dt\le C,\)
(iii) \(\displaystyle \int_0^1\int_{\mathbb{T}}|(w^{\varepsilon}-v^{\varepsilon})_{xx}|^{2}\sigma^{\varepsilon}\,dx\,dt\le C\varepsilon^{-7},\)
(iv) \(\displaystyle \int_0^1\int_{\mathbb{T}}a^{2}(x)|(w^{\varepsilon}-v^{\varepsilon})_{xx}|^{2}\sigma^{\varepsilon}\,dx\,dt\le C\sqrt{\varepsilon}.\)
Proof. The proof of (i) is similar to that of Lemma 2.7 (i), hence is omitted. Notice that if we do not differentiate the equation, then no difficulty arises from the diffusion term a(x). On the other hand, once we differentiate the equation to obtain further estimates, we face some difficulties coming from the term a, as seen below.
We now prove (ii). Let wε be the solution of (A)ε. Differentiate (A)ε with respect to the x variable to get
\[
\varepsilon w^{\varepsilon}_{tx}+H_p(x,w^{\varepsilon}_x)\,w^{\varepsilon}_{xx}+H_x(x,w^{\varepsilon}_x)-(\varepsilon^{4}+a)w^{\varepsilon}_{xxx}-a_xw^{\varepsilon}_{xx}=0. \tag{2.18}
\]
Set ξ := |wεx|²/2, so that
\[
\xi_t=w^{\varepsilon}_x w^{\varepsilon}_{xt},\qquad \xi_x=w^{\varepsilon}_x w^{\varepsilon}_{xx},\qquad \xi_{xx}=|w^{\varepsilon}_{xx}|^{2}+w^{\varepsilon}_x w^{\varepsilon}_{xxx}.
\]
We need to be careful with the last term, which comes from the diffusion term. Notice first that we have
\[
a_x^{2}(x)\le Ca(x)\quad\text{for all }x\in\mathbb{T}, \tag{2.19}
\]
since \(\sqrt{a}\in C^{2}(\mathbb{T})\). Indeed, \(\sqrt{a}\in C^{2}(\mathbb{T})\) implies \(\sqrt{a}\in \operatorname{Lip}(\mathbb{T})\). Thus, \(|a_x|=|2(\sqrt{a})_x\sqrt{a}|\le C\sqrt{a}\). We next notice that for δ > 0 small enough,
\[
a_xw^{\varepsilon}_x w^{\varepsilon}_{xx}\le C|a_x||w^{\varepsilon}_{xx}|\le \frac{C}{\delta}+\delta a_x^{2}|w^{\varepsilon}_{xx}|^{2}\le C+\frac{1}{2}\,a|w^{\varepsilon}_{xx}|^{2}. \tag{2.20}
\]
Hence,
\[
\varepsilon\xi_t+H_p\cdot\xi_x-(\varepsilon^{4}+a)\xi_{xx}+\Big(\varepsilon^{4}+\frac{a}{2}\Big)|w^{\varepsilon}_{xx}|^{2}\le C.
\]
Multiply the above by σε, integrate over T × [0, 1], and use integration by parts to yield the conclusion of (ii).
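As a quick numerical illustration of (2.19) (not part of the original argument; the specific diffusion a below is an assumption chosen for the test), one can check the bound a_x² ≤ Ca for a degenerate a whose square root is Lipschitz:

```python
import numpy as np

# Sanity check of (2.19): a_x^2 <= C a, for the sample degenerate diffusion
# a(x) = sin^2(2 pi x) (vanishes at x = 0, 1/2; sqrt(a) = |sin(2 pi x)| is
# Lipschitz).  Analytically a_x^2 = 16 pi^2 a cos^2(2 pi x) <= 16 pi^2 a.
x = np.linspace(0.0, 1.0, 10001)
a = np.sin(2 * np.pi * x) ** 2
ax = 2 * np.pi * np.sin(4 * np.pi * x)   # exact derivative of a
C = 16 * np.pi ** 2
print(bool(np.all(ax ** 2 <= C * a + 1e-10)))  # True
```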
Next, we note that (iii) follows from a computation similar to that of Lemma 2.7 (ii). To prove (iv), subtract (A)ε from (E)ε and differentiate with respect to the variable x. Setting ψ(x, t) := |(vε − wε)_x|²/2 and multiplying the differentiated identity by (vε − wε)_x, we get
\begin{align*}
&\varepsilon\psi_t+H_p(x,w^{\varepsilon}_x)\,\psi_x+\big(H_p(x,v^{\varepsilon}_x)-H_p(x,w^{\varepsilon}_x)\big)v^{\varepsilon}_{xx}(v^{\varepsilon}-w^{\varepsilon})_x\\
&\quad+\big(H_x(x,v^{\varepsilon}_x)-H_x(x,w^{\varepsilon}_x)\big)(v^{\varepsilon}-w^{\varepsilon})_x+(\varepsilon^{4}+a(x))\big(|(v^{\varepsilon}-w^{\varepsilon})_{xx}|^{2}-\psi_{xx}\big)\\
&\quad-\big[a_x(v^{\varepsilon}-w^{\varepsilon})_{xx}\big](v^{\varepsilon}-w^{\varepsilon})_x=0.
\end{align*}
Estimating the lower-order terms as before, we obtain
\[
\varepsilon\psi_t+H_p(x,w^{\varepsilon}_x)\,\psi_x-(\varepsilon^{4}+a(x))\psi_{xx}+\frac{a(x)^{2}}{4}|(v^{\varepsilon}-w^{\varepsilon})_{xx}|^{2}
\le (C+C\varepsilon^{-1/2})|(v^{\varepsilon}-w^{\varepsilon})_x|^{2}+\varepsilon^{1/2}a(x)|w^{\varepsilon}_{xx}|^{2}.
\]
We multiply the above inequality by σε, integrate over T × [0, 1] and use (i), (ii) to yield (iv).
Thanks to Lemmas 2.10 and 2.11, we obtain
Theorem 2.12. Assume that (H1)–(H4) hold, and the associated ergodic constant is zero. Let wε be the solution of (A)ε with initial data u(·, 0) = u0 ∈ Lip(T). Then
\[
\lim_{\varepsilon\to0}\varepsilon\|w^{\varepsilon}_t(\cdot,1)\|_{L^{\infty}(\mathbb{T})}=0.
\]
Set
\[
u^{\infty}_c[u_0](x):=\lim_{t\to\infty}\big(u(x,t)+ct\big),
\]
where u is the solution to (1.1) and c is the ergodic constant of (1.6). Due to Theorem 2.2, this limit exists. As we have already emphasized many times, because of the multiplicity of solutions to (1.6), the asymptotic profile v in Theorem 2.2 is completely determined by the initial data once H is fixed. In this section, we try to make clear how the asymptotic profile depends on the initial data, following the argument of Davini, Siconolfi [25]. We use the following assumption
\[
d_c(x,y):=\inf\Big\{\int_0^t L_c(\gamma(s),-\dot\gamma(s))\,ds\ :\ t>0,\ \gamma\in \operatorname{AC}([0,t],\mathbb{T}^{n}),\ \gamma(0)=x,\ \gamma(t)=y\Big\}. \tag{2.23}
\]
The function dc plays the role of a fundamental solution for Hamilton–Jacobi equations. We gather some basic properties of the function dc.
Proposition 2.13. Assume that (H3)’ holds. We have
(i) dc (x, y) = sup{v(x) − v(y) : v is a subsolution of (1.6)},
(ii) dc (x, x) = 0 and dc (x, y) ≤ dc (x, z) + dc (z, y) for any x, y, z ∈ Tn ,
(iii) dc (·, y) is a subsolution of (1.6) for all y ∈ Tn and a solution of (1.6) in Tn \ {y}
for all y ∈ Tn .
Proof. We first prove
\[
v(x)-v(y)\le \int_0^t L_c(\gamma,-\dot\gamma)\,ds
\]
for all x, y ∈ Tn, any subsolution v of (1.6), and γ ∈ AC([0, t], Tn) with γ(0) = x and γ(t) = y. This is at least formally easy to prove. Indeed, let v be a smooth subsolution of (1.6). We have the following simple computations:
\begin{align*}
v(x)-v(y)&=-\int_0^t\frac{dv(\gamma(s))}{ds}\,ds=\int_0^t Dv(\gamma(s))\cdot(-\dot\gamma(s))\,ds\\
&\le \int_0^t\Big[\big(L(\gamma(s),-\dot\gamma(s))+c\big)+\big(H(\gamma(s),Dv(\gamma(s)))-c\big)\Big]\,ds
\le \int_0^t L_c(\gamma(s),-\dot\gamma(s))\,ds.
\end{align*}
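The middle inequality above is exactly the Legendre (Young–Fenchel) duality between H and L: since L(x, v) = sup_p (p · v − H(x, p)), we have p · v ≤ L(x, v) + H(x, p) for all p and v. Spelled out along the curve:

```latex
% Young--Fenchel inequality applied with p = Dv(\gamma(s)), v = -\dot\gamma(s):
Dv(\gamma(s))\cdot(-\dot\gamma(s))
\le L(\gamma(s),-\dot\gamma(s)) + H(\gamma(s),Dv(\gamma(s)))
= \big(L + c\big) + \big(H - c\big)
\le L_c(\gamma(s),-\dot\gamma(s)),
% using H(x, Dv) \le c for a subsolution and L_c := L + c.
```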
2.5. ASYMPTOTIC PROFILE OF THE FIRST-ORDER CASE 41
This immediately implies (i). Due to the convexity assumption on H, we can obtain an approximate smooth subsolution by mollification as in the proof of Proposition 1.5. By using this approximation, we can make this argument rigorous. We invite the interested reader to fill in the details here.
It is straightforward to check that (i) implies (ii), and (iii) is a consequence of (i) and stability results for viscosity solutions (see Proposition 4.10 in Section 4.5).
Remark 2.6. We can easily check that y is in A if and only if (2.22) holds for some δ0 > 0. Indeed, for any δ > 0, we only need to consider the case where δ > δ0. Fix ε > 0; then there exist tε ≥ δ0 and γε ∈ AC([0, tε], Tn) with γε(0) = γε(tε) = y such that
\[
\int_0^{t_\varepsilon}L_c(\gamma_\varepsilon(s),-\dot\gamma_\varepsilon(s))\,ds<\varepsilon.
\]
Sending ε → 0 yields
\[
\inf\Big\{\int_0^t L_c(\gamma(s),-\dot\gamma(s))\,ds\ :\ t\ge\delta,\ \gamma\in\operatorname{AC}([0,t],\mathbb{T}^{n}),\ \gamma(0)=\gamma(t)=y\Big\}=0.
\]
Theorem 2.14. Assume that (H3)’ holds. A point y ∈ Tn is in the Aubry set A if and
only if dc (·, y) is a solution of (1.6).
We refer to [35, Proposition 5.8] and [49, Proposition A.3] for the proofs.
Theorem 2.15. Assume that (H3)’ holds. Then the Aubry set A is nonempty, com-
pact, and a uniqueness set of (1.6), that is, if v and w are solutions of (1.6), and v = w
on A, then v = w on Tn .
Proof. Let us first proceed to prove that A is a uniqueness set of (1.6). It is enough to
show that if v ≤ w on A, then v ≤ w on Tn . For any small ε > 0, there exists an open
set Uε such that A ⊂ Uε with ∩ε>0 Uε = A, and v ≤ w + ε in Uε . Set Kε := Tn \ Uε .
Fix any z ∈ Kε. Since z ∉ A, dc(·, z) is not a supersolution at x = z in light of
Proposition 2.13 (iii) and Theorem 2.14. Then, there exist a constant rz > 0 and a
function ϕz ∈ C¹(Tn) such that B(z, rz) ⊂ Tn \ A,
\[
H(x,D\phi_z(x))<0\quad\text{for all }x\in B(z,r_z),
\]
\[
\phi_z(z)>0=d_c(z,z),\qquad \phi_z(x)<d_c(x,z)\quad\text{for all }x\in\mathbb{T}^{n}\setminus B(z,r_z).
\]
See the proof of Theorem 4.19 for details. We set ψz (x) = max{dc (x, z), ϕz (x)} for
x ∈ Tn and observe that ψz is a subsolution of (1.6) in light of Proposition 4.10, and
that H(x, Dψz (x)) < 0 in a neighborhood Vz of z in the classical sense.
By the compactness of Kε, there is a finite sequence of points \(\{z_j\}_{j=1}^{J}\) such that \(K_\varepsilon\subset\bigcup_{j=1}^{J}V_{z_j}\). We define the function ψ ∈ C(Tn) by \(\psi(x)=(1/J)\sum_{j=1}^{J}\psi_{z_j}(x)\) and observe by convexity (H3)' that ψ is a strict subsolution to (1.6) in some neighborhood V of Kε. Regularizing ψ by mollification, if necessary, we may assume that ψ ∈ C¹(V).
Thus, we may apply the comparison result (see Theorem 4.8 in Section 4.4) to conclude
that v ≤ w + ε in Kε . Sending ε → 0 yields v ≤ w in Tn \ A, which implies the
conclusion.
To prove that A ≠ ∅, suppose that for all y ∈ Tn, dc(·, y) is not a solution to (1.6). By the above argument, for each z ∈ Tn, ψz is a subsolution of (1.6), and H(x, Dψz(x)) < 0 in a neighborhood Vz of z in the classical sense. By the compactness of Tn, there is a finite sequence \(\{y_i\}_{i=1}^{N}\subset\mathbb{T}^{n}\) such that \(\mathbb{T}^{n}=\bigcup_{i=1}^{N}V_{y_i}\). We set \(w(x):=(1/N)\sum_{i=1}^{N}\psi_{y_i}(x)\) for all x ∈ Tn and δ := (1/N) min_{i=1,…,N} δi. By the convexity of H(x, ·), we have H(x, Dw(x)) ≤ c − δ in Tn in the viscosity sense, which contradicts the first formula for c in Proposition 1.5.
The compactness of A is a straightforward result of stability of viscosity solutions
(see Proposition 4.9 in Section 4.5).
Theorem 2.16. Assume that (H2)–(H4) hold. Let u∞c[u0] − ct be the asymptotic solution for (1.1), that is, u∞c[u0](x) := lim_{t→∞}(u(x, t) + ct), where u is the solution to (1.1). Then we have, for all y ∈ A,
\begin{align}
u^{\infty}_c[u_0](y)&=\min\{d_c(y,z)+u_0(z)\ :\ z\in\mathbb{T}^{n}\} \tag{2.24}\\
&=\sup\{v(y)\ :\ v\ \text{is a subsolution to (1.6) with }v\le u_0\ \text{in }\mathbb{T}^{n}\}.\nonumber
\end{align}
Proof. We write vu0 for the right hand side of (2.24). Let y ∈ A and choose zy ∈ Tn
so that
vu0 (y) = dc (y, zy ) + u0 (zy ).
By the definition of the function dc, for any ε > 0, there exist tε > 0 and a curve ξε ∈ AC([0, tε], Tn) with ξε(0) = y, ξε(tε) = zy such that
\[
d_c(y,z_y)>\int_0^{t_\varepsilon}L_c(\xi_\varepsilon,-\dot\xi_\varepsilon)\,ds-\varepsilon.
\]
By the definition of the Aubry set, for any n ∈ N, there exist tn ≥ n and a curve δε ∈ AC([0, tn], Tn) such that δε(0) = δε(tn) = y, and
\[
\int_0^{t_n}L_c(\delta_\varepsilon(s),-\dot\delta_\varepsilon(s))\,ds<\varepsilon.
\]
2.5. ASYMPTOTIC PROFILE OF THE FIRST-ORDER CASE 43
Define γε ∈ AC([0, tn + tε], Tn) by
\[
\gamma_\varepsilon(s)=\begin{cases}\delta_\varepsilon(s)&\text{for }s\in[0,t_n],\\[0.5ex]\xi_\varepsilon(s-t_n)&\text{for }s\in[t_n,\,t_n+t_\varepsilon].\end{cases}
\]
Figure 2.2
We observe that
\begin{align*}
v_{u_0}(y)&>\int_0^{t_\varepsilon}L_c(\xi_\varepsilon,-\dot\xi_\varepsilon)\,ds+u_0(z_y)-\varepsilon\\
&>\int_0^{t_n}L_c(\delta_\varepsilon(s),-\dot\delta_\varepsilon(s))\,ds+\int_0^{t_\varepsilon}L_c(\xi_\varepsilon,-\dot\xi_\varepsilon)\,ds+u_0(z_y)-2\varepsilon\\
&=\int_0^{t_n+t_\varepsilon}L_c(\gamma_\varepsilon(s),-\dot\gamma_\varepsilon(s))\,ds+u_0(\gamma_\varepsilon(t_n+t_\varepsilon))-2\varepsilon
\ \ge\ u_c(y,\,t_n+t_\varepsilon)-2\varepsilon,
\end{align*}
where uc(x, t) := u(x, t) + ct for (x, t) ∈ Tn × [0, ∞). Thus, sending n → ∞ and ε → 0 in this order yields v_{u_0}(y) ≥ u∞c(y).
By the definition of v_{u_0}, we can easily check v_{u_0} ≤ u0 on Tn in view of Proposition 2.13 (ii). Note that v_{u_0} is a subsolution to (1.6) in view of Proposition 2.13 (iii) and Corollary 4.16 (i). Thus, in light of the comparison principle for (1.1), we get v_{u_0}(x) − ct ≤ u(x, t) for all (x, t) ∈ Tn × [0, ∞). Thus, v_{u_0}(x) ≤ lim_{t→∞}(u(x, t) + ct) = u∞c(x).
The second equality is a straightforward consequence of Proposition 2.13 together with the observation v_{u_0} ≤ u0 on Tn.
In light of Proposition 2.13, Theorems 2.14, 2.15 and 2.16, we get the following
representation formula for the asymptotic profile:
Corollary 2.17. Assume that (H2)–(H4) hold. Let u∞c[u0] − ct be the asymptotic solution for (1.1). Then we have the representation formula for the asymptotic profile u∞c[u0]:
\begin{align}
u^{\infty}_c[u_0](x)&=\min\{d_c(x,y)+v_{u_0}(y)\ :\ y\in\mathcal{A}\} \tag{2.25}\\
&=\inf\{v(x)\ :\ v\ \text{is a solution to (1.6) with }v\ge v_{u_0}\ \text{in }\mathbb{T}^{n}\},\nonumber
\end{align}
where
\[
v_{u_0}(x)=\min\{d_c(x,z)+u_0(z)\ :\ z\in\mathbb{T}^{n}\}\quad\text{for all }x\in\mathbb{T}^{n}.
\]
Proof. We denote by w_{u_0} the right-hand side in (2.25). Note first that this is a solution of (1.6) in view of Theorem 2.14 and Corollary 4.16. Moreover, we can check that
\[
w_{u_0}(x)=\min\{d_c(x,y)+v_{u_0}(y)\ :\ y\in\mathcal{A}\}=v_{u_0}(x)\quad\text{for all }x\in\mathcal{A}.
\]
Example 2.2. Now, let us consider the asymptotic profile for the Hamilton–Jacobi equation appearing in Example 4.1. As we observed at the beginning of Section 2.2, the associated ergodic problem is
\[
|Dv|=\frac{\sqrt{c^{2}-h(x)^{2}}}{h(x)}\quad\text{in }\mathbb{T}^{n},
\]
where
\[
c:=\max_{x\in\mathbb{T}^{n}}h(x).
\]
We can easily check that we have the explicit formula for the Aubry set
\[
\mathcal{A}=\{x\in\mathbb{T}^{n}\ :\ h(x)=\max_{\mathbb{T}^{n}}h\}.
\]
From this, we gain a better understanding of how the asymptotic profile depends on the force term h and the initial data u0 through Corollary 2.17.
Example 2.3. We consider Example 2.2 in the more explicit setting discussed in Example 2.1. Let n = 1 and let h be the function given by (2.6). Our goal is to derive the asymptotic profiles by using the formula given in Corollary 2.17 for some given initial data u0.
In this setting, we have A = {1/4, 3/4}. Thus, letting u∞c[u0] := lim_{t→∞}(u(x, t) + ct), we obtain by Corollary 2.17,
\[
u^{\infty}_c[u_0](x)=\min\Big\{d_c\Big(x,\tfrac14\Big)+v_{u_0}\Big(\tfrac14\Big),\ d_c\Big(x,\tfrac34\Big)+v_{u_0}\Big(\tfrac34\Big)\Big\}.
\]
(ii) Boundary value problems: If we consider different types of optimal control prob-
lems (e.g., state constraint, exit-time problem, reflection problem, stopping time
problem), then we need to consider several types of boundary value problems for
Hamilton–Jacobi equations, which cause various kinds of difficulties. See Mitake
[64], Barles, Ishii, Mitake [7] for state constraint problems, Mitake [65], Tchamba
[77], Barles, Porretta, Tchamba [10] , Barles, Ishii, Mitake [7] for Dirichlet prob-
lems, Ishii [50], Barles, Mitake [9], Barles, Ishii, Mitake [7] for Neumann problems,
and Mitake, Tran [70] for obstacle problems.
(iii) Weakly coupled systems: If we consider an optimal control problem which appears
in the dynamic programming for the system whose states are governed by random
changes (jumps), then we can naturally derive the weakly coupled system of
Hamilton–Jacobi equations. See Cagnetti, Gomes, Mitake, Tran [17], Mitake, Tran [68, 69], Camilli, Ley, Loreti, Nguyen [19], Nguyen [73], Davini, Zavidovique [27], Mitake, Siconolfi, Tran, Yamada [66] for developments in this direction. The characterization of the asymptotic profiles remains open.
(iv) Degenerate viscous Hamilton–Jacobi equations: In addition to the works [17, 70],
we refer to Ley, Nguyen [58] for this direction. Also, not much is known about
the limiting profiles.
50 CHAPTER 3. VANISHING DISCOUNT PROBLEM
This shows that inviscid ergodic problem (3.2) has many solutions of different types,
which confirms that solutions of (1.6) are not unique even up to additive constants
in general. This example was known from the beginning of the theory of viscosity
solutions (see [60, Proposition 5.4]).
We now give two examples on the nonuniqueness issue for the degenerate viscous
case (ergodic problem (3.1)).
See Figure 3.3. Here, notice that we do not require much on the behavior of a in
(1/4, 3/4).
We identify the torus T with the interval [0, 1] here. The ergodic problem in this setting is
\[
|u'|^{2}=V(x)+a(x)u''+c\quad\text{in }\mathbb{R}, \tag{3.4}
\]
where u is 1-periodic.
52 CHAPTER 3. VANISHING DISCOUNT PROBLEM
Set
\[
u(x):=\begin{cases}
\displaystyle\int_0^{x}y^{2}\Big(y-\frac14\Big)^{2}\,dy & \text{for }0\le x\le\dfrac14,\\[1.5ex]
\alpha & \text{for }\dfrac14\le x\le\dfrac34,\\[1.5ex]
\displaystyle\alpha-\int_{3/4}^{x}\Big(y-\frac34\Big)^{2}(y-1)^{2}\,dy & \text{for }\dfrac34\le x\le1,
\end{cases}
\]
where
\[
\alpha:=\int_0^{1/4}y^{2}\Big(y-\frac14\Big)^{2}\,dy=\frac{1}{30720}. \tag{3.5}
\]
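The value of α in (3.5) can be confirmed by exact rational arithmetic (a small sketch; the expansion below is elementary calculus, not part of the original text):

```python
from fractions import Fraction

# Exact verification of (3.5): alpha = \int_0^{1/4} y^2 (y - 1/4)^2 dy.
# Expanding the integrand gives y^4 - y^3/2 + y^2/16, whose antiderivative
# is y^5/5 - y^4/8 + y^3/48; evaluate at y = 1/4 with rational arithmetic.
a = Fraction(1, 4)
alpha = a ** 5 / 5 - a ** 4 / 8 + a ** 3 / 48
print(alpha)  # 1/30720
```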
Extend u in a periodic way to R. It is not hard to see that ±u are solutions to (3.4) with c = 0. Moreover, for β ≥ 0, define
\[
u_\beta(x):=\min\{u(x),\,-u(x)+\beta\}.
\]
Note that uβ(x) = min{α, −α + β} for x ∈ [1/4, 3/4]. By checking carefully, we see that uβ is also a solution of (3.4) with c = 0.
This again demonstrates that the ergodic problem (3.1) for degenerate viscous Hamilton–Jacobi equations has many solutions of different types in general.
3.1. SELECTION PROBLEMS 53
Problem (D)ε is uniquely solvable as we see in Section 1.2 because of the term “εuε ”,
which is sometimes called a discount factor in optimal control theory.
The main assumptions in this chapter are:
(H5) H ∈ C²(Tn × Rn), p ↦ H(x, p) is convex for each x ∈ Tn, and there exists C > 0 so that
\[
\|\varepsilon u^{\varepsilon}\|_{L^{\infty}(\mathbb{T}^{n})}+\|Du^{\varepsilon}\|_{L^{\infty}(\mathbb{T}^{n})}\le C. \tag{3.8}
\]
See the proof of Theorem 1.3. Once (3.8) is achieved, we obtain that
\[
\|u^{\varepsilon}-u^{\varepsilon}(x_0)\|_{L^{\infty}(\mathbb{T}^{n})}\le C
\]
for some fixed x0 ∈ Tn. Therefore, in view of the Arzelà–Ascoli theorem, there exists a subsequence {εj}j∈N with εj → 0 as j → ∞ such that {uεj + c/εj}j∈N converges uniformly on Tn.
Remark 3.1. (i) It is worth emphasizing that all of the above results strongly require the convexity of the Hamiltonians. On the other hand, to obtain the a priori estimate (3.8), we only need the superlinearity of H; in particular, we do NOT need the convexity assumption. Thus, the question of whether uε + c/ε converges as ε → 0 without the convexity assumption remains. Indeed, selection problems for Hamilton–Jacobi equations with nonconvex Hamiltonians remain rather open. See Section 3.6 for some further discussions of more recent developments.
(ii) Note also that, in the above theorem, the first-order case and the second-order case are quite different because of the appearance of the diffusion term, which is delicate to handle. In particular, E is a family of solutions of (3.1) (not just subsolutions), which is different from that of [26]. We will address this matter clearly later.
Since uε , u are not smooth in general, in order to perform our analysis, we need a
regularizing process as in the previous chapter.
By (H5), L is well-defined, that is, L(x, v) is finite for each (x, v) ∈ Tn × Rn . Further-
more, L is of class C 1 , convex with respect to v, and superlinear.
For each ε, η > 0, we study
Lemma 3.2. Assume (H5), (H6). Then there exists a constant C > 0, independent of ε and η, so that
\[
\|Du^{\varepsilon,\eta}\|_{L^{\infty}(\mathbb{T}^{n})}\le C.
\]
where δx0 denotes the Dirac delta measure at x0. By the maximum principle and integrating (AJ)ηε on Tn, we obtain
\[
\theta^{\varepsilon,\eta}>0\ \text{in }\mathbb{T}^{n}\setminus\{x_0\},\qquad \int_{\mathbb{T}^{n}}\theta^{\varepsilon,\eta}(x)\,dx=1.
\]
In light of the Riesz theorem, for every ε, η > 0, there exists a probability measure νε,η ∈ P(Tn × Rn) satisfying
\[
\int_{\mathbb{T}^{n}}\psi(x,Du^{\varepsilon,\eta})\theta^{\varepsilon,\eta}(x)\,dx=\iint_{\mathbb{T}^{n}\times\mathbb{R}^{n}}\psi(x,p)\,d\nu^{\varepsilon,\eta}(x,p) \tag{3.13}
\]
for all ψ ∈ Cc(Tn × Rn). It is clear that supp(νε,η) ⊂ Tn × B(0, C) for some C > 0 due to Lemma 3.2. Since
\[
\iint_{\mathbb{T}^{n}\times\mathbb{R}^{n}}d\nu^{\varepsilon,\eta}(x,p)=1\quad\text{for all }\varepsilon>0,\ \eta>0,
\]
due to the compactness of weak convergence of measures, there exist two subsequences
ηk → 0 and εj → 0 as k → ∞, j → ∞, respectively, and probability measures
ν εj , ν ∈ P(Tn × Rn ) (see [30, Theorem 4] for instance) so that
\[
\nu^{\varepsilon_j,\eta_k}\rightharpoonup\nu^{\varepsilon_j}\ \text{as }k\to\infty,\qquad
\nu^{\varepsilon_j}\rightharpoonup\nu\ \text{as }j\to\infty, \tag{3.14}
\]
in the sense of measures. We also have that supp(νεj), supp(ν) ⊂ Tn × B(0, C). For each such ν, set µ ∈ P(Tn × Rn) so that the pushforward measure of µ associated with Φ(x, v) = (x, DvL(x, v)) is ν, that is, for all ψ ∈ Cc(Tn × Rn),
\[
\iint_{\mathbb{T}^{n}\times\mathbb{R}^{n}}\psi(x,p)\,d\nu(x,p)=\iint_{\mathbb{T}^{n}\times\mathbb{R}^{n}}\psi(x,D_vL(x,v))\,d\mu(x,v). \tag{3.15}
\]
We write µ = µ(x0, {ηk}k, {εj}j) to indicate the dependence. In general, there could be many such limits µ for different choices of x0, {ηk}k or {εj}j. We define the set M ⊂ P(Tn × Rn) by
\[
\mathcal{M}:=\bigcup_{x_0\in\mathbb{T}^{n},\,\{\eta_k\}_k,\,\{\varepsilon_j\}_j}\mu(x_0,\{\eta_k\}_k,\{\varepsilon_j\}_j).
\]
Proposition 3.3. Assume that (H5), (H6) hold and the ergodic constant of (3.1) is 0. Let ν and µ be probability measures given by (3.14) and (3.15). Then,
(i) \(\displaystyle \iint_{\mathbb{T}^{n}\times\mathbb{R}^{n}}\big(D_pH(x,p)\cdot p-H(x,p)\big)\,d\nu(x,p)=\iint_{\mathbb{T}^{n}\times\mathbb{R}^{n}}L(x,v)\,d\mu(x,v)=0,\)
(ii) \(\displaystyle \iint_{\mathbb{T}^{n}\times\mathbb{R}^{n}}\big(D_pH(x,p)\cdot D\varphi-a(x)\Delta\varphi\big)\,d\nu(x,p)=0\) for all ϕ ∈ C²(Tn).
Proof. Multiplying (AJ)ηε by uε,η and integrating on Tn, we obtain
\[
\varepsilon u^{\varepsilon,\eta}(x_0)=\int_{\mathbb{T}^{n}}\big(\varepsilon\theta^{\varepsilon,\eta}-\operatorname{div}(D_pH(x,Du^{\varepsilon,\eta})\theta^{\varepsilon,\eta})-\Delta\big((a(x)+\eta^{2})\theta^{\varepsilon,\eta}\big)\big)u^{\varepsilon,\eta}\,dx.
\]
Moreover,
\[
\int_{\mathbb{T}^{n}}\big(D_pH(x,Du^{\varepsilon,\eta})\cdot Du^{\varepsilon,\eta}-H(x,Du^{\varepsilon,\eta})\big)\theta^{\varepsilon,\eta}\,dx
=\iint_{\mathbb{T}^{n}\times\mathbb{R}^{n}}\big(D_pH(x,p)\cdot p-H(x,p)\big)\,d\nu^{\varepsilon,\eta}(x,p).
\]
The identity between the two integrals in (i) follows by (3.15) and the duality of convex functions. Note in the above computation that we have limj→∞ εjuεj(x0) = 0 because of the assumption that c = 0.
We now proceed to prove the second part. Fix ϕ ∈ C²(Tn). Multiply (AJ)ηε by ϕ and integrate on Tn to get
\[
\int_{\mathbb{T}^{n}}\big(D_pH(x,Du^{\varepsilon,\eta})\cdot D\varphi-a(x)\Delta\varphi\big)\theta^{\varepsilon,\eta}\,dx
=\eta^{2}\int_{\mathbb{T}^{n}}\Delta\varphi\,\theta^{\varepsilon,\eta}\,dx+\varepsilon\varphi(x_0)-\varepsilon\int_{\mathbb{T}^{n}}\varphi\,\theta^{\varepsilon,\eta}\,dx.
\]
where
\[
\mathcal{F}:=\Big\{\mu\in\mathcal{P}(\mathbb{T}^{n}\times\mathbb{R}^{n})\ :\ \iint_{\mathbb{T}^{n}\times\mathbb{R}^{n}}\big(v\cdot D\phi-a(x)\Delta\phi\big)\,d\mu(x,v)=0\ \text{for all }\phi\in C^{2}(\mathbb{T}^{n})\Big\}.
\]
\[
\frac{d}{ds}D_vL(\gamma(s),\dot\gamma(s))=D_xL(\gamma(s),\dot\gamma(s)).
\]
This idea was first discovered by Mañé [62], who relaxed the original idea of Mather
[63]. Minimizers of the minimizing problem (3.16) are precisely Mather measures for
first-order Hamilton–Jacobi equations.
When a ≡ 1, this coincides with the definition of stochastic Mather measures
for viscous Hamilton–Jacobi equations given by Gomes [39]. This means that this
definition is quite natural for the current degenerate viscous case, and it covers both
the first-order and the viscous case. Gomes [40, 41] also introduced the notion of
generalized Mather measures by using the duality principle.
Lemma 3.5. Assume that (H5), (H6) hold and the ergodic constant of (3.1) is 0. We have
\[
\iint_{\mathbb{T}^{n}\times\mathbb{R}^{n}}L(x,v)\,d\mu(x,v)\ge0\quad\text{for all }\mu\in\mathcal{F}. \tag{3.17}
\]
Furthermore,
\[
\min_{\mu\in\mathcal{F}}\iint_{\mathbb{T}^{n}\times\mathbb{R}^{n}}L(x,v)\,d\mu(x,v)=0.
\]
Since a solution w of ergodic problem (E) is not smooth in general, in order to use
the admissible condition in F, we need to find a family of smooth approximations of
w, which are approximate subsolutions to (E). A natural way to perform this task is
to use the usual convolution technique. More precisely, for each η > 0, let
\[
w^{\eta}(x):=\gamma^{\eta}*w(x)=\int_{\mathbb{R}^{n}}\gamma^{\eta}(y)\,w(x+y)\,dy, \tag{3.18}
\]
where γη(y) = η⁻ⁿγ(η⁻¹y) (here γ ∈ C∞c(Rn) is a standard symmetric mollifier such that γ ≥ 0, supp γ ⊂ B(0, 1) and \(\|\gamma\|_{L^{1}(\mathbb{R}^{n})}=1\)).
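A minimal numerical sketch of the mollification (3.18) in one dimension (the bump function, grid, and test function w below are illustrative assumptions, not from the text): for Lipschitz w, the sup-distance between wη and w is of order η.

```python
import numpy as np

# Mollification w^eta = gamma^eta * w with gamma^eta(y) = eta^{-1} gamma(y/eta),
# where gamma is the standard bump supported in B(0, 1), normalized in L^1.
# For Lipschitz w one has |w^eta - w| <= Lip(w) * eta.
def bump(z):
    out = np.zeros_like(z)
    m = np.abs(z) < 1.0
    out[m] = np.exp(-1.0 / (1.0 - z[m] ** 2))
    return out

def mollify(w, x, eta, npts=4001):
    y = np.linspace(-eta, eta, npts)
    dy = y[1] - y[0]
    g = bump(y / eta) / eta
    g = g / (np.sum(g) * dy)                 # enforce ||gamma^eta||_{L^1} = 1
    return np.array([np.sum(g * w(xi + y)) * dy for xi in x])

w = lambda x: np.abs(np.sin(np.pi * x))      # 1-periodic, Lipschitz with constant pi
x = np.linspace(0.0, 1.0, 201)
errs = [np.max(np.abs(mollify(w, x, eta) - w(x))) for eta in (0.1, 0.05)]
print(errs)
```

The kink of w at the integers is exactly where the Lipschitz bound is saturated; the computed errors stay below π·η and shrink as η decreases.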
In the first-order case, it is quite simple to show that {wη }η>0 indeed are approx-
imate subsolutions to (E) (see the second part of Proposition 1.5). In the current
degenerate viscous setting, it is much more complicated because of the appearance of
the possibly degenerate viscous term a(x)∆w. To prove that {wη }η>0 are approximate
subsolutions to (E), we need to be able to control the commutation term
Lemma 3.6 (A commutation lemma). Assume (H5), (H6) hold. Assume that w is a viscosity solution of (E) and let wη be the function defined by (3.18) for η > 0. There exist a constant C > 0 and a continuous function Sη : Tn → R such that
\[
\|S^{\eta}\|_{L^{\infty}(\mathbb{T}^{n})}\le C,\qquad \lim_{\eta\to0}S^{\eta}(x)=0\ \text{for each }x\in\mathbb{T}^{n},
\]
and
\[
H(x,Dw^{\eta})\le a(x)\Delta w^{\eta}+S^{\eta}(x)\quad\text{in }\mathbb{T}^{n}.
\]
Lemma 3.7 (Uniform convergence). Assume (H5), (H6) hold. Then there exists a universal constant C > 0 such that \(\|S^{\eta}\|_{L^{\infty}(\mathbb{T}^{n})}\le C\eta^{1/2}\).
The proofs of Lemmas 3.6 and 3.7 are postponed to Section 3.4. By using the
commutation lemma, Lemma 3.6, we give a proof of Lemma 3.5.
Proof of Lemma 3.5 and Proposition 3.4. Let w be a solution of ergodic problem (E).
By Lemma 3.6, we have that
where we use the admissible condition of µ ∈ F to go from the second line to the last
line. Thanks to (3.19), we let η → 0 and use the Lebesgue dominated convergence
theorem to deduce that
\[
\iint_{\mathbb{T}^{n}\times\mathbb{R}^{n}}L(x,v)\,d\mu(x,v)\ge0.
\]
Thus, item (i) in Proposition 3.3 confirms that any measure µ ∈ M minimizes the action (3.16). This is equivalent to the fact that \(\mathcal{M}\subset\widetilde{\mathcal{M}}\).
Lemma 3.8. Assume that (H5), (H6) hold and the ergodic constant of (3.1) is 0. Let
w ∈ C(Tn ) be any solution of (E), and, for ε, η > 0, wη and θε,η be, respectively, the
function given by (3.18) and the solution to (AJ)ηε for some x0 ∈ Tn .
Then,
\[
u^{\varepsilon,\eta}(x_0)\ge w^{\eta}(x_0)-\int_{\mathbb{T}^{n}}w^{\eta}\theta^{\varepsilon,\eta}\,dx-\frac{C\eta}{\varepsilon}-\frac{1}{\varepsilon}\int_{\mathbb{T}^{n}}S^{\eta}\theta^{\varepsilon,\eta}\,dx, \tag{3.20}
\]
which immediately implies η²|Δwη| ≤ Cη. Combining this with Lemma 3.6, we see that wη satisfies
\[
H(x,Dw^{\eta})\le(a(x)+\eta^{2})\Delta w^{\eta}+C\eta+S^{\eta}(x)\quad\text{in }\mathbb{T}^{n}.
\]
Hence, using the equation for uε,η and the convexity of H,
\begin{align*}
\varepsilon w^{\eta}+C\eta+S^{\eta}(x)
&\ge \varepsilon(w^{\eta}-u^{\varepsilon,\eta})+H(x,Dw^{\eta})-H(x,Du^{\varepsilon,\eta})-(a(x)+\eta^{2})\Delta(w^{\eta}-u^{\varepsilon,\eta})\\
&\ge \varepsilon(w^{\eta}-u^{\varepsilon,\eta})+D_pH(x,Du^{\varepsilon,\eta})\cdot D(w^{\eta}-u^{\varepsilon,\eta})-(a(x)+\eta^{2})\Delta(w^{\eta}-u^{\varepsilon,\eta}).
\end{align*}
Multiplying the last line by θε,η and integrating over Tn, the right-hand side becomes
\[
\varepsilon\int_{\mathbb{T}^{n}}(w^{\eta}-u^{\varepsilon,\eta})\theta^{\varepsilon,\eta}\,dx-\int_{\mathbb{T}^{n}}\big(\operatorname{div}(D_pH(x,Du^{\varepsilon,\eta})\theta^{\varepsilon,\eta})+\Delta(a(x)\theta^{\varepsilon,\eta})\big)(w^{\eta}-u^{\varepsilon,\eta})\,dx.
\]
Proposition 3.9. Assume that (H5), (H6) hold and the ergodic constant of (3.1) is 0.
Let uε be the solution of (E)ε , and µ ∈ M. Then, for any ε > 0,
\[
\iint_{\mathbb{T}^{n}\times\mathbb{R}^{n}}u^{\varepsilon}(x)\,d\mu(x,v)\le0.
\]
Thus, in light of properties (i), (ii) of µ in Proposition 3.3, we integrate the above inequality with respect to dµ(x, v) on Tn × Rn to imply
\[
\iint_{\mathbb{T}^{n}\times\mathbb{R}^{n}}\varepsilon u^{\varepsilon}\,d\mu(x,v)\le\iint_{\mathbb{T}^{n}\times\mathbb{R}^{n}}S^{\eta}(x)\,d\mu(x,v).
\]
Let η → 0 and use the Lebesgue dominated convergence theorem for the integral on
the right hand side of the above to complete the proof.
We remark that the key idea of Proposition 3.9 was first observed in [41, Corollary
4].
We suggest that readers formulate and prove heuristic versions of Lemma 3.8 and Proposition 3.9, in which one “assumes” w, uε ∈ C∞(Tn), where w and uε are solutions of (E), (D)ε, respectively. By doing so, one will be able to see the intuition behind the complicated technicalities. To make the arguments rigorous, as we see in the proofs of Lemma 3.8 and Proposition 3.9, the regularizing process and the commutation lemma in Section 3.4 play essential roles.
Proposition 3.10. Assume that (H5), (H6) hold and the ergodic constant of (3.1) is
0. Then
\[
\liminf_{\varepsilon\to0}u^{\varepsilon}(x)\ge u_{0}(x).
\]
Proof. Let φ ∈ E, that is, φ is a solution of (E) satisfying (3.12). Let φη = γ η ∗ φ for
η > 0.
Fix x0 ∈ Tn . Take two subsequences ηk → 0 and εj → 0 so that (3.14) holds,
and limj→∞ uεj (x0 ) = lim inf ε→0 uε (x0 ). Let µ be the corresponding measure satisfying
ν = Φ# µ. In view of Lemmas 3.8, 3.7,
\begin{align*}
u^{\varepsilon_j,\eta_k}(x_0)
&\ge \phi^{\eta_k}(x_0)-\int_{\mathbb{T}^{n}}\phi^{\eta_k}\theta^{\varepsilon_j,\eta_k}\,dx-\frac{C\eta_k}{\varepsilon_j}-\frac{1}{\varepsilon_j}\int_{\mathbb{T}^{n}}S^{\eta_k}\theta^{\varepsilon_j,\eta_k}\,dx\\
&\ge \phi^{\eta_k}(x_0)-\int_{\mathbb{T}^{n}}\phi^{\eta_k}\theta^{\varepsilon_j,\eta_k}\,dx-\frac{C\eta_k}{\varepsilon_j}-\frac{C\eta_k^{1/2}}{\varepsilon_j}.
\end{align*}
Let k → ∞ to imply
\[
u^{\varepsilon_j}(x_0)\ge\phi(x_0)-\iint_{\mathbb{T}^{n}\times\mathbb{R}^{n}}\phi(x)\,d\nu^{\varepsilon_j}(x,p).
\]
Proposition 3.11. Assume that (H5), (H6) hold and the ergodic constant of (3.1) is 0. Let {εj}j∈N be any subsequence converging to 0 such that uεj converges uniformly to a solution u of (E) as j → ∞. Then the limit u belongs to E. In particular, u ≤ u0 in Tn.
Proof. In view of Proposition 3.9, it is clear that any uniform limit along subse-
quences belongs to E. By the definition of the function u0 , it is also obvious that
limj→∞ uεj (x) ≤ u0 (x).
Note that, in the proof of Proposition 3.10, we needed to use the estimate \(\|S^{\eta_k}\|_{L^{\infty}}\le C\eta_k^{1/2}\) from Lemma 3.7. The pointwise convergence of Sηk to 0 in Lemma 3.6 is not enough.
Secondly, M is the collection of stochastic Mather measures that can be derived
from the solutions θε,η of the adjoint equations. It should be made clear that we do not
3.4. PROOF OF THE COMMUTATION LEMMA 65
\[
u^{\varepsilon}(x)\to\widetilde{u}^{0}(x):=\sup_{\phi\in\widetilde{\mathcal{E}}}\phi(x)\quad\text{uniformly for }x\in\mathbb{T}^{n}\ \text{as }\varepsilon\to0, \tag{3.21}
\]
Proof of Lemma 3.6. It is important to note that, in view of Theorem 1.3 (see also [4, Theorem 3.1]), all viscosity solutions of (E) are Lipschitz continuous with a universal Lipschitz constant C. Therefore, we have
\[
-C\le -a(x)\Delta w\le C\quad\text{in }\mathbb{T}^{n}
\]
in the viscosity sense. The result of Ishii [48] on the equivalence of viscosity solutions and solutions in the distribution sense for linear elliptic equations, and the simple structure of a(x), allow us to conclude further that
\[
w^{\delta}\to w\ \text{uniformly in }\mathbb{T}^{n},\qquad Dw^{\delta}\rightharpoonup Dw\ \text{weakly-}\ast\ \text{in }L^{\infty}(\mathbb{T}^{n}),
\]
where
\[
R_1^{\eta}(x):=H(x,Dw^{\eta}(x))-\int_{\mathbb{R}^{n}}H(x+y,Dw(x+y))\gamma^{\eta}(y)\,dy,
\]
\[
R_2^{\eta}(x):=\int_{\mathbb{R}^{n}}a(x+y)\Delta w(x+y)\gamma^{\eta}(y)\,dy-a(x)\Delta w^{\eta}(x).
\]
We will provide treatments for R1η and R2η separately in Lemmas 3.12 and 3.13 below.
Note that R2η is exactly the commutation term mentioned in Section 3.2.2.
Basically, Lemma 3.12 gives that R1η (x) ≤ Cη for all x ∈ Tn and η > 0. Lemma
3.13 confirms that |R2η (x)| ≤ C for all x ∈ Tn and η > 0, and limη→0 R2η (x) = 0 for
each x ∈ Tn .
We thus set S η (x) := Cη + R2η (x) to finish the proof.
Lemma 3.12. Assume that (H5), (H6) hold. Then there exists C > 0 independent of η such that
\[
R_1^{\eta}(x)\le C\eta\quad\text{for all }x\in\mathbb{T}^{n}\ \text{and }\eta>0.
\]
The proof goes essentially in the same way as that of the second part of Proposition
1.5. Nevertheless, we repeat it here to remind the readers of this simple but important
technique.
|H(x + y, Dw(x + y)) − H(x, Dw(x + y))| ≤ Cη for a.e. y ∈ B(0, η).
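The mechanism behind Lemma 3.12 is elementary: a mollifier supported in B(0, η) only sees an O(η) variation of a function that is Lipschitz in x. The following sketch checks this numerically; the function f and the bump kernel below are our illustrative choices, not objects from the text.

```python
import numpy as np

def mollification_error(f, eta, n=4001):
    """max |f * gamma_eta - f| on [-1, 1] for a bump kernel supported in B(0, eta)."""
    x = np.linspace(-1.0, 1.0, n)
    dx = x[1] - x[0]
    m = int(round(eta / dx))
    r = np.arange(-m, m + 1) * dx
    ker = np.exp(-1.0 / np.clip(1.0 - (r / eta) ** 2, 1e-12, None))
    ker /= ker.sum()                               # unit mass: pure local averaging
    f_vals = f(x)
    f_eta = np.convolve(f_vals, ker, mode="same")
    return np.max(np.abs(f_eta - f_vals)[m:-m])    # ignore the boundary band

f = lambda x: np.abs(np.sin(3.0 * x))              # Lipschitz with constant 3
for eta in (0.1, 0.05, 0.025):
    # averaging over B(0, eta) moves a Lipschitz function by at most Lip(f) * eta
    assert mollification_error(f, eta) <= 3.0 * eta
```

The same one-line bound is what the displayed estimate expresses for H(·, Dw(x + y)) in the x variable.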
Lemma 3.13. Assume that (H5), (H6) hold. Then there exists a constant C > 0 independent of η such that |R2η(x)| ≤ C for all x ∈ Tn and η > 0. Moreover, limη→0 R2η(x) = 0 for each x ∈ Tn.
Proof. We first show the boundedness of R2η. Integrating by parts,

|R2η(x)| = | ∫Rn (a(x + y) − a(x))∆w(x + y) γη(y) dy |
= | ∫Rn γη(y) Da(x + y) · Dw(x + y) dy + ∫Rn (a(x + y) − a(x)) Dw(x + y) · Dγη(y) dy |
≤ C ∫Rn ( γη(y) + |y| |Dγη(y)| ) dy ≤ C.
Next, we prove the last claim that, for each x ∈ Tn , limη→0 R2η (x) = 0. There are
two cases to be considered
(i) a(x) = 0, and (ii) a(x) > 0.
We handle case (i) first. Since a(x) = 0 = minTn a, we also have Da(x) = 0.
Therefore,
|R2η(x)| = | ∫Rn a(x + y)∆w(x + y) γη(y) dy |
= | ∫Rn Dw(x + y) · Da(x + y) γη(y) dy + ∫Rn Dw(x + y) · Dγη(y) a(x + y) dy |
≤ C ∫Rn ( |y| γη(y) + |y|² |Dγη(y)| ) dy ≤ Cη.
The use of Taylor’s expansion of a(·) ∈ C 2 (Tn ) around x is important in the above
computation.
Let us now study case (ii), in which a(x) > 0. We choose η0 > 0 sufficiently small
such that a(z) ≥ cx > 0 for |z − x| ≤ η0 for some cx > 0. In view of (3.22), we deduce
further that
|∆w(z)| ≤ C/cx =: Cx for a.e. z ∈ B(x, η0). (3.24)
Note that η0 depends on x. For η < η0 , we have
|R2η(x)| = | ∫Rn (a(x + y) − a(x))∆w(x + y) γη(y) dy |
≤ Cx ∫Rn |a(x + y) − a(x)| γη(y) dy ≤ Cx ∫Rn |y| γη(y) dy ≤ Cx η.
In both cases, we can conclude that limη→0 |R2η (x)| = 0. Note however that the bound
for |R2η (x)| is dependent on x.
Remark 3.5. In the proof of Lemma 3.13, estimate (3.22) plays a crucial role. That is the main reason why we require w to be a solution instead of just a subsolution of (E), so that (3.22) holds automatically. In fact, (3.22) does not hold for subsolutions of (E) in general. This point is one of the main differences between first-order and second-order Hamilton–Jacobi equations. For first-order Hamilton–Jacobi equations, that is, the case a ≡ 0, estimate (3.22) holds automatically even just for subsolutions, thanks to the coercivity of H.
We also want to comment a bit more on the rate of convergence of R2η in the above
proof. For each δ > 0, set U δ := {x ∈ Tn : a(x) = 0 or a(x) > δ}. Then there exists a
constant C = C(δ) > 0 such that
|R2η (x)| ≤ C(δ)η for all x ∈ U δ .
We do not, however, know the rate of convergence of R2η in Tn \ U δ through the above proof. With a more careful analysis, we are indeed able to improve the convergence rate of R2η to η^{1/2} in Lemma 3.7. We do not know whether this rate is optimal or not, but for our purpose it is good enough. See the proof of Proposition 3.10 and the first point in Remark 3.4.
Proof of Lemma 3.7. Fix x ∈ Tn and η > 0. We consider two cases: (i) minB(x,η) a ≤ η, and (ii) minB(x,η) a > η.
In case (i), there exists x̄ ∈ B(x, η) such that a(x̄) ≤ η. Then, in light of (2.19) (see also [17, Lemma 2.6]), there exists a constant C > 0 such that

|R2η(x)| ≤ C ∫Rn ( η^{1/2} γη(y) + η^{3/2} |Dγη(y)| ) dy ≤ Cη^{1/2}.
Let us now consider case (ii), in which minB(x,η) a > η. A direct computation shows

|R2η(x)| ≤ ∫Rn |a(x + y) − a(x)| |∆w(x + y)| γη(y) dy
≤ C ∫Rn ( |a(x + y) − a(x)| / a(x + y) ) γη(y) dy
≤ C ∫Rn ( |Da(x + y)| |y| / a(x + y) ) γη(y) dy + Cη
≤ C ∫Rn ( |y| / a(x + y)^{1/2} ) γη(y) dy + Cη
≤ C ∫Rn ( |y| / η^{1/2} ) γη(y) dy + Cη ≤ Cη^{1/2}.
Lemma 3.14. Let w ∈ C(Tn ) satisfy (3.22). Then, w is a viscosity subsolution of (E)
if and only if w is a subsolution of (E) in the almost everywhere sense.
Proof. Assume first that w is a viscosity subsolution of (E). Then, by the first part of
the proof of Lemma 3.6, w is a subsolution of (E) in the distribution sense. In light of
(3.22), w is furthermore a subsolution of (E) in the almost everywhere sense.
On the other hand, assume that w is a subsolution of (E) in the almost everywhere
sense. For each η > 0, let wη be the function defined by (3.18). In view of Lemma 3.7 and the stability result for viscosity solutions, we obtain that w is a viscosity subsolution of (E).
Another consequence of Lemmas 3.6 and 3.7 is a representation formula for the ergodic constant c in this setting. If we repeat the argument in the proof of Proposition 1.5 by using Lemmas 3.6 and 3.7, we obtain

Proposition 3.15. Assume (H5), (H6) hold. Let c be the ergodic constant of (3.1). Then,

c = inf_{φ∈C²(Tn)} max_{x∈Tn} ( −a(x)∆φ(x) + H(x, Dφ(x)) ).
3.5 Applications
Let us now discuss the limit of uε in Examples 3.1, 3.3 and 3.4 in Section 3.1.
Thus,

ũ0(x) ≤ sup{ w(x) : w is a solution to (3.2) s.t. ∫∫Tn×Rn w dµ(x, v) ≤ 0, ∀ µ ∈ {δ{1/4}×{0}} ∪ {δ{3/4}×{0}} }
= sup{ w(x) : w is a solution to (3.2) s.t. w(1/4) ≤ 0, w(3/4) ≤ 0 },
which implies ũ0(1/4) ≤ 0 and ũ0(3/4) ≤ 0. On the other hand, noting that 0 is a subsolution of (D)ε, by the comparison principle we have uε ≥ 0 in T, which implies ũ0 ≥ 0 in T. Thus, we obtain ũ0(1/4) = 0 and ũ0(3/4) = 0, and therefore

ũ0 = u0 = u01 = u02,
where u01 , u02 are the functions defined in Example 3.1.
In particular, uε → u0 uniformly for x ∈ T as ε → 0.
We emphasize here that the characterization of Mather measures is very hard in general. Indeed, it is still not known whether characterization (3.26) holds in the multi-dimensional case, even in this specific form. See Section 3.6 for some further
discussions.
Noting that δ{0}×{0}, δ{1/4}×{0}, δ{3/4}×{0} ∈ M̃,
we obtain
uε → uα uniformly for x ∈ T as ε → 0,
where uα and α are the function and the constant given by (3.6) and (3.5), respectively.
See Figure 3.4.
Similarly, we can characterize the limit of the discounted approximation for Example 3.4. Noting that
δ{0}×{0}, δ{1/4}×{0}, δ{1/2}×{0} ∈ M̃,
we obtain
uε → uα uniformly for x ∈ T as ε → 0,
where uα and α are the function and the constant given by (3.7) and (3.5), respectively.
See Figure 3.6.
(ii) Selection problems for nonconvex Hamilton–Jacobi equations: Most problems are
open. In some examples, invariant measures and invariant sets do not exist (see
Cagnetti, Gomes, Tran [18] for the discussion on Mather measures, and Gomes,
Mitake, Tran [43] for the discussion on Aubry set). It is therefore extremely
challenging to establish general convergence results and to describe the limits if
they exist. Gomes, Mitake, Tran [43] proved convergence results for some special
nonconvex first-order cases in 2016.
(iii) Rate of convergence: It is quite challenging to obtain rates of convergence (quantitative results) for Theorem 3.1. Mitake, Soga [67] studied this
for some special first-order situations in 2016. It is demonstrated there that error estimates depend highly on the dynamics of the corresponding dynamical systems in general.
(iv) Aubry (uniqueness) set: The structure of solutions of (1.8) is poorly understood.
For instance, in the case of the inviscid (first-order) equation, the Aubry set
plays a key role as a uniqueness set for the ergodic problem. In a general viscous
case where the diffusion could be degenerate, there have not been any similar notions/results on the uniqueness (Aubry) set for (1.8) up to now.
(v) Commutation Lemma 3.6: Another way to perform this task is to do the sup-inf-convolution first, and the usual convolution later. Ishii, Mitake, Tran did this in an unpublished note before finding the new variational approach in [51]. Are these ideas useful in other contexts?
(vi) Applications: Theorem 3.1 is very natural in its own right. It is therefore ex-
tremely interesting to use it to get some further PDE results and to find connec-
tions to dynamical systems.
(ii) Finite difference approximation: In [75], Soga first formulated the selection problem which appears in the finite difference procedure, and also proved the convergence there in a setting similar to that of the vanishing viscosity procedure.
Appendix
The readers can read this Appendix independently from the other chapters. In this Appendix, we
give a short introduction to the theory of viscosity solutions of first-order Hamilton–
Jacobi equations, which was introduced by Crandall and Lions [24] (see also Crandall,
Evans, and Lions [22]). The readers can use this as a starting point to learn the theory
of viscosity solutions. Some of this short introduction is taken from the book of Evans
[31]. Let us for simplicity focus on the initial-value problem of first-order (inviscid)
Hamilton–Jacobi equations
ut + H(x, Du) = 0 in Rn × (0, ∞), (C)
u(x, 0) = u0(x) on Rn,
where the Hamiltonian H : Rn × Rn → R and the initial function u0 : Rn → R are
given. We will give precise assumptions on H and u0 when necessary.
The original approach [24, 22, 60] is to consider the following approximated equation
uεt + H(x, Duε) = ε∆uε in Rn × (0, ∞), (C)ε
uε(x, 0) = u0(x) on Rn,
for ε > 0. The term ε∆uε in (C)ε regularizes the Hamilton–Jacobi equation, and this
is called the method of vanishing viscosity. We then let ε → 0 and study the limit of
the family {uε }ε>0 . It is often the case that, in light of a priori estimates, {uε }ε>0 is
bounded and equicontinuous on compact subsets of Rn × [0, ∞). We hence can use the
Arzelà–Ascoli theorem to deduce that there exists a subsequence {εj }j converging to 0 as j → ∞ such that

uεj → u locally uniformly in Rn × [0, ∞) as j → ∞,
for some limit function u ∈ C(Rn × [0, ∞)). We expect that u is some kind of solution
of (C), but we only know that u is continuous, with absolutely no information about Du and ut. Also, as (C) is fully nonlinear in Du and not of divergence structure, we cannot use integration by parts and weak convergence techniques to justify that u is a weak solution in such a sense. We instead use the maximum principle to obtain the notion of weak solution, namely that of viscosity solutions.
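The vanishing viscosity procedure can be imitated numerically. The following sketch is entirely our own illustration — the quadratic Hamiltonian H(p) = |p|²/2, the periodic setting, the grid sizes, and the initial datum are all assumptions — and it checks two features suggested by the a priori estimates: the viscous solutions stay trapped between the constant solutions ±1, and two small viscosities produce nearby profiles.

```python
import numpy as np

def viscous_hj(eps, N=200, T=0.2):
    """Explicit finite differences for u_t + |u_x|^2/2 = eps * u_xx on the 1-torus."""
    dx = 1.0 / N
    x = np.arange(N) * dx
    u = np.cos(2.0 * np.pi * x)                  # illustrative initial datum
    dt = 0.2 * min(dx, dx * dx / (2.0 * eps))    # advective + parabolic CFL condition
    t = 0.0
    while t < T:
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
        uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / (dx * dx)
        u = u + dt * (eps * uxx - 0.5 * ux ** 2)
        t += dt
    return u

u1 = viscous_hj(0.02)
u2 = viscous_hj(0.04)
# comparison with the constant solutions +-1 keeps u^eps between min u0 and max u0
assert u1.min() >= -1.01 and u1.max() <= 1.01
# two small viscosities give nearby profiles, consistent with a limit as eps -> 0
assert np.max(np.abs(u1 - u2)) < 0.25
```

This is only a caricature of the analysis: the rigorous route goes through equicontinuity and the Arzelà–Ascoli theorem, not through a particular discretization.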
The terminology "viscosity solutions" is used in honor of the vanishing viscosity technique (see the proof of Theorem 4.1 in Section 4.2). We will see later that the definition of viscosity solutions does not involve viscosity of any kind, but the name remains because of the history of the subject. We refer to [6, 23, 31] for the general theory of viscosity solutions.
where V is the normal velocity at each point on Γ(t), and h ∈ C(Rn ) is a given positive
function. In this section, we consider the case where Γ(t) is described by the following
graph
Γ(t) = {(x, u(x, t)) : x ∈ Rn }
for a real-valued auxiliary function u : Rn × [0, ∞) → R.
Figure 4.1
Figure 4.1 shows an example of Γ(t) and how it evolves. We note that the direction xn+1 in the picture indicates the positive direction of V. The function h is determined by the phenomenon we want to model, and it may depend on the curvatures, the time, etc. Here, we simply consider the situation where h depends only on the x variable. We refer to [38] for many interesting applications appearing in front propagation problems.
Suppose that everything is smooth. Then, by elementary calculations, we get

V = v⃗ · n⃗ = (0, ut) · (−Du, 1)/√(1 + |Du|²) = ut/√(1 + |Du|²),

where v⃗ denotes the velocity in the direction xn+1. Plugging this into (4.1), we get that u is a solution to the Hamilton–Jacobi equation

ut + h(x)√(1 + |Du|²) = 0 in Rn × (0, ∞).
Example 4.1. We consider the simplest case where n = 1, h(x) ≡ 1 and two initial
data: (i) a line in Figure 4.2, (ii) a curve in Figure 4.3.
Figure 4.2
Figure 4.3
In the context of large time behavior (Chapter 2), the large time limit (asymptotic profile), if it exists, is a solution to the associated ergodic problem. We also observe that it depends on the initial data, as demonstrated in Figures 4.2 and 4.3. In general, it is highly nontrivial to characterize this dependence, as we deal with nonlinear equations. Section 2.5 gives a partial answer to this question (see Examples 2.2, 2.3 in Section 2.5).
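Example 4.1(i) can be checked by hand: with h ≡ 1 and the line u0(x) = ax, the graph simply translates, and the candidate u(x, t) = ax − t√(1 + a²) solves ut + √(1 + |ux|²) = 0, consistent with Figure 4.2. A quick numerical residual check (the slope a = 0.7 and the time t = 1.3 are arbitrary choices of ours):

```python
import numpy as np

a, t = 0.7, 1.3                       # arbitrary slope and time
x = np.linspace(-2.0, 2.0, 401)
dx = x[1] - x[0]

u = a * x - t * np.sqrt(1.0 + a * a)  # the candidate solution at time t

u_t = -np.sqrt(1.0 + a * a)           # exact time derivative of the candidate
u_x = np.gradient(u, dx)              # numerical spatial derivative (equals a)
residual = u_t + np.sqrt(1.0 + u_x ** 2)
assert np.max(np.abs(residual)) < 1e-10
```

For the curved initial datum of Figure 4.3 no such closed form is available, which is precisely why a weak solution concept is needed once corners form.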
Inviscid cases.
We consider the optimal control problem, for fixed (x, t) ∈ Rn × [0, ∞),
Minimize ∫0^t L(γ(s), −γ̇(s)) ds + u0(γ(t))
over all controls γ ∈ AC ([0, t], Rn ) with γ(0) = x. Here u0 is a given bounded uniformly
continuous function on Rn . We denote by u(x, t) the minimum cost. It can be proven
that u solves the following Cauchy problem:
ut + H(x, Du) = 0 in Rn × (0, ∞),
u(x, 0) = u0(x) on Rn,
where the Hamiltonian H is the Legendre transform of the Lagrangian L, that is, H(x, p) = sup_{v∈Rn} ( p · v − L(x, v) ).
Let us show a quick formal proof of this. Note first that u satisfies the so-called dynamic
programming principle, that is, for any h > 0,
u(x, t + h) = inf { ∫0^h L(γ(s), −γ̇(s)) ds + u(γ(h), t) : γ(0) = x }. (4.2)
Here, γ∗ denotes a minimizer of the minimization problem for u(x, t + h), we set δ(s) := γ∗(s + h) for s ∈ [−h, t], and we used the Bellman principle.
We rewrite it as

( u(δ(−h), t + h) − u(δ(0), t) ) / h = (1/h) ∫0^h L(γ(s), −γ̇(s)) ds.
Sending h → 0 yields

ut + Du · (−δ̇(0)) − L(x, −δ̇(0)) = 0,
which more or less implies the conclusion. We can use this formal idea to give a rigorous
proof by performing careful computations and using the notion of viscosity solutions.
We refer to [31, 5] for details for instance.
Example 4.2 (Classical mechanics). We consider the case where L is the difference between a kinetic energy and a potential energy, i.e., L(x, v) := |v|²/2 − V(x) for a given function V which is bounded and uniformly continuous on Rn. Then,

u(x, t) = inf { ∫0^t [ |γ̇(s)|²/2 − V(γ(s)) ] ds + u0(γ(t)) : γ ∈ AC([0, t], Rn), γ(0) = x }
solves the following Cauchy problem:

ut + |Du|²/2 + V(x) = 0 in Rn × (0, ∞),
u(·, 0) = u0 on Rn.
= inf_{y∈Rn} { tL( (x − y)/t ) + u0(y) }.
It is quite straightforward to prove that the first line implies the second line in the
above by using Jensen’s inequality. We leave it to the interested readers. See [31] for
instance. We also refer to [5, 20] for the connections between the theory of viscosity
solutions and optimal control theory.
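The inf-formula above can be evaluated directly. For the quadratic Lagrangian L(v) = |v|²/2 of Example 4.2 (with V ≡ 0) and the quadratic datum u0(y) = y²/2 — an illustrative choice of ours — the minimization over y can be carried out by hand and yields u(x, t) = x²/(2(1 + t)). The sketch below compares a brute-force discrete minimization with this closed form:

```python
import numpy as np

def hopf_lax(x, t, u0, y_grid):
    """u(x, t) = min_y [ t * L((x - y)/t) + u0(y) ] with L(v) = v**2 / 2."""
    v = (x - y_grid) / t
    return np.min(t * v ** 2 / 2.0 + u0(y_grid))

u0 = lambda y: y ** 2 / 2.0                  # illustrative quadratic datum
y = np.linspace(-10.0, 10.0, 200001)

for x in (-1.5, 0.0, 0.8):
    for t in (0.5, 1.0, 2.0):
        exact = x ** 2 / (2.0 * (1.0 + t))   # inf-convolution of two quadratics
        assert abs(hopf_lax(x, t, u0, y) - exact) < 1e-6
```

The minimizer y = x/(1 + t) is interior to the grid here; for nonsmooth data the same formula produces the corners mentioned in the front propagation examples.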
Example 4.4 (Discounted approximation). Fix δ > 0. For x ∈ Rn , define
vδ(x) = inf { ∫0^∞ e−δs L(γ(s), −γ̇(s)) ds : γ ∈ AC([0, ∞), Rn), γ(0) = x }.
This is an infinite horizon problem in optimal control theory. The function v δ satisfies
the dynamic programming principle
vδ(x) = inf { ∫0^h e−δs L(γ(s), −γ̇(s)) ds + e−δh vδ(γ(h)) : γ ∈ AC([0, h], Rn), γ(0) = x }
for any h > 0. We can use this to check that vδ solves the following discounted Hamilton–Jacobi equation:

δvδ + H(x, Dvδ) = 0 in Rn.

In the formula for vδ, the factor e−δs plays the role of a discount, and therefore the constant δ above is called the discount factor in optimal control theory.
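With h equal to one time step, the dynamic programming principle above is exactly a value-iteration scheme. In the following sketch every concrete choice is our own illustration: we take H(x, p) = |p| − cos(2πx), whose Lagrangian is L(x, v) = cos(2πx) for |v| ≤ 1 and +∞ otherwise, and compute vδ on a periodic grid. The quantity δvδ then comes out close to the constant −1, in line with the vanishing discount analysis of Chapter 3 (for this Hamiltonian the ergodic constant is c = 1).

```python
import numpy as np

# Illustrative choices: H(x, p) = |p| - cos(2*pi*x), so the running cost is
# cos(2*pi*x) and admissible trajectories move with speed at most 1.
delta, N = 0.05, 200
dx = 1.0 / N
dt = dx                                   # at speed <= 1, one grid cell per step
x = np.arange(N) * dx
running_cost = np.cos(2.0 * np.pi * x)

v = np.zeros(N)
for _ in range(60000):                    # value iteration: the DPP with h = dt
    best_next = np.minimum(v, np.minimum(np.roll(v, 1), np.roll(v, -1)))
    v = dt * running_cost + (1.0 - delta * dt) * best_next

# vanishing discount heuristic: delta * v^delta should be close to -c; here c = 1,
# since the optimal strategy parks at x = 1/2, where cos(2*pi*x) = -1
assert abs(delta * v[N // 2] + 1.0) < 0.05
assert np.max(delta * v) < -0.85
```

The residual spread of δvδ away from x = 1/2 reflects the discounted travel cost to the bottom of the potential; it disappears as δ → 0.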
Viscous cases.
where A(x) := σ(x)σ T (x), by using the dynamic programming principle, the Itô for-
mula and the notion of viscosity solutions.
We refer to [36, 74] for the connections between the theory of viscosity solutions
and stochastic optimal control theory.
4.2 Definitions
Let us now introduce the definitions of viscosity subsolutions, supersolutions, and so-
lutions. These definitions are encoded naturally in the vanishing viscosity method (see
the proof of Theorem 4.1 below).
• u(·, 0) ≤ u0 on Rn ,
• u(·, 0) ≥ u0 on Rn ,
Remark 4.1. (i) In Definition 4.1, a local maximum (resp., minimum) can be replaced
by a maximum (resp., minimum) or even by a strict maximum (resp., minimum).
Besides, a C 1 test function v can be replaced by a C ∞ test function v as well.
(ii) For (x0 , t0 ) ∈ Rn × (0, ∞), we set
The sets D+ u(x0 , t0 ), D− u(x0 , t0 ) are called the superdifferential and subdifferential of
u at (x0 , t0 ), respectively. We can rewrite the definitions of viscosity subsolutions and
supersolutions by using the superdifferential and subdifferential, respectively (see [22]).
We also give the definitions of viscosity subsolutions, supersolutions, and solutions
to the following second order equation
ut + F(x, Du, D²u) = 0 in Rn × (0, ∞), (4.3)
u(x, 0) = u0(x) on Rn.
• u(·, 0) ≤ u0 on Rn ,
a + F (x0 , p, X) ≤ 0,
where
• u(·, 0) ≥ u0 on Rn ,
a + F (x0 , p, X) ≥ 0,
where
We call J 2,+ u(x0 , t0 ) and J 2,− u(x0 , t0 ) the super-semijet and sub-semijet of u at
(x0 , t0 ), respectively.
Remark 4.2. In Definition 4.2, J^{2,±}u(x0, t0) can be replaced by the closures of these sets, which are defined as

J̄^{2,±}u(x0, t0) := { (a, p, X) ∈ R × Rn × M^{n×n}_sym : ∃ (xk, tk, ak, pk, Xk) s.t. (ak, pk, Xk) ∈ J^{2,±}u(xk, tk) and (xk, tk, ak, pk, Xk) → (x0, t0, a, p, X) as k → ∞ }.
We here give a precise result concerning the vanishing viscosity method explained
in the introduction of this appendix. It shows that Definition 4.1 arises naturally in
light of this procedure and the maximum principle. We will verify the assumption in
this theorem in Section 4.6 below.
Theorem 4.1 (Vanishing viscosity method). Let uε be the smooth solution of (C)ε for ε > 0. Assume that there exists a subsequence {uεj }j such that

uεj → u locally uniformly in Rn × [0, ∞) as j → ∞,

for some u ∈ C(Rn × [0, ∞)). Then u is a viscosity solution of (C).
Proof. We only prove that u is a viscosity subsolution of (C), as the supersolution property is proved similarly. Take any ϕ ∈ C∞(Rn × (0, ∞)) and assume that u − ϕ has a strict maximum at (x0, t0) ∈ Rn × (0, ∞).
Recall that uεj → u locally uniformly as j → ∞. For j large enough, uεj − ϕ has a
local maximum at some point (xj , tj ) and
(xj , tj ) → (x0 , t0 ), as j → ∞.
We have

(uεj)t(xj, tj) = ϕt(xj, tj), Duεj(xj, tj) = Dϕ(xj, tj), −∆uεj(xj, tj) ≥ −∆ϕ(xj, tj).
Hence,

ϕt(xj, tj) + H(xj, Dϕ(xj, tj)) = (uεj)t(xj, tj) + H(xj, Duεj(xj, tj)) = εj∆uεj(xj, tj) ≤ εj∆ϕ(xj, tj).

Letting j → ∞ yields ϕt(x0, t0) + H(x0, Dϕ(x0, t0)) ≤ 0, as desired.
Remark 4.3.
(i) Let us emphasize that obtaining viscosity solutions through vanishing viscosity is the classical approach. This method does not work for general second-order equations. In general, we can use Perron's method to prove the existence of viscosity solutions.
(ii) As seen in the proof of Theorem 4.1, we lose the information on Duε and ∆uε as ε → 0 in this argument. Evans [32] introduced the nonlinear adjoint method to understand these quantities in the vanishing viscosity procedure. In particular, his aim was to understand gradient shock structures in the nonconvex setting.
4.3 Consistency
We here prove that the notion of viscosity solutions is consistent with that of classical
solutions.
Firstly, it is quite straightforward to see that if u ∈ C 1 (Rn × [0, ∞)) solves (C)
in the classical sense, then u is a viscosity solution of (C). Next, we show that if a
viscosity solution is differentiable at some point, then it solves (C) there. We need the
following lemma.
We define

ϕ(x) = ∫_{|x|}^{2|x|} ρ(r) dr + |x|² for x ∈ Rm. (4.6)
ϕ(0) = 0, Dϕ(0) = 0.
Lemma 4.4 (Extrema at a terminal time). Fix T > 0. Assume that u is a viscosity subsolution (resp., supersolution) of (C). Assume further that, on Rn × (0, T], u − ϕ has a local maximum (resp., minimum) at a point (x0, t0) ∈ Rn × (0, T] for some ϕ ∈ C1(Rn × [0, ∞)). Then

ϕt(x0, t0) + H(x0, Dϕ(x0, t0)) ≤ 0 (resp., ≥ 0).
Proof. We just need to verify the case of subsolutions. Assume u − ϕ has a strict maximum at (x0, T). For ε > 0, we define

ϕ̃(x, t) = ϕ(x, t) + ε/(T − t) for (x, t) ∈ Rn × (0, ∞).

If ε > 0 is small enough, then u − ϕ̃ has a local maximum at a point (xε, tε) ∈ Rn × (0, T), and (xε, tε) → (x0, T) as ε → 0. By definition of viscosity subsolutions, we have

ϕ̃t(xε, tε) + H(xε, Dϕ̃(xε, tε)) ≤ 0,

which is equivalent to

ϕt(xε, tε) + ε/(T − tε)² + H(xε, Dϕ(xε, tε)) ≤ 0.

Hence,

ϕt(xε, tε) + H(xε, Dϕ(xε, tε)) ≤ 0.

We let ε → 0 to achieve the result.
Let us first give a formal argument to see how the comparison principle works. Let u, v be a smooth subsolution and a smooth supersolution to (C), respectively, with the same initial data. Our goal is to prove that u ≤ v on Rn × [0, T]. We argue by contradiction, and therefore we suppose that

max_{Rn×[0,T]} ( u − v − λt ) > 0

for a small λ > 0. Suppose formally that the maximum is attained at (x0, t0) ∈ Rn × [0, T). Because of the initial data, we have t0 > 0. Then, at (x0, t0),

ut = vt + λ and Du = Dv.

Thus,

0 ≥ ut(x0, t0) + H(x0, Du(x0, t0)) = vt(x0, t0) + λ + H(x0, Dv(x0, t0)) ≥ λ > 0,

which is a contradiction.
We now establish the comparison principle (hence uniqueness) for (C) rigorously
by using the so-called doubling variable argument, which was originally introduced by
Kružkov [57].
We assume further that the Hamiltonian H satisfies
Theorem 4.5 (Comparison Principle for (C)). Assume that (A1) holds. If u, ũ are a
bounded uniformly continuous viscosity subsolution, and supersolution of (C) on Rn ×
[0, T ], respectively, then u ≤ ũ on Rn × [0, T ].
Proof. We assume by contradiction that
Hence,
|x0 − y0| + |t0 − s0| ≤ Cε and |x0| + |y0| ≤ C/ε^{1/2}. (4.8)
We next use Φ(x0, y0, t0, s0) ≥ Φ(x0, x0, t0, t0) to deduce that

(1/ε²)( |x0 − y0|² + (t0 − s0)² ) ≤ ũ(x0, t0) − ũ(y0, s0) + λ(t0 − s0) + ε(x0 − y0) · (x0 + y0).
In view of (4.8) and the uniform continuity of ũ, we get

( |x0 − y0|² + (t0 − s0)² )/ε² → 0 and |x0 − y0| + |t0 − s0| = o(ε) as ε → 0. (4.9)
By (4.8) and (4.9), we can take ε > 0 small enough so that s0 , t0 ≥ µ > 0 for some
µ > 0.
Notice that (x, t) 7→ Φ(x, y0, t, s0) has a maximum at (x0, t0). In view of the definition of Φ, u − ϕ has a maximum at (x0, t0) for

ϕ(x, t) := ũ(y0, s0) + λ(t + s0) + (1/ε²)( |x − y0|² + (t − s0)² ) + ε( |x|² + |y0|² ).
By definition of viscosity subsolutions,

λ + 2(t0 − s0)/ε² + H( x0, 2(x0 − y0)/ε² + 2εx0 ) ≤ 0. (4.10)
Similarly, by using the fact that (y, s) 7→ Φ(x0, y, t0, s) has a maximum at (y0, s0), we obtain that

−λ + 2(t0 − s0)/ε² + H( y0, 2(x0 − y0)/ε² − 2εy0 ) ≥ 0. (4.11)
Subtracting (4.10) from (4.11), and using (4.8) and (A1), we get

2λ ≤ H( y0, 2(x0 − y0)/ε² − 2εy0 ) − H( x0, 2(x0 − y0)/ε² + 2εx0 )
≤ C|x0 − y0| ( 1 + |2(x0 − y0)/ε²| + 2ε|y0| ) + Cε|x0 − y0|.

In view of (4.8) and (4.9), letting ε → 0 yields 2λ ≤ 0, which is a contradiction.
Remark 4.4. In Theorem 4.5, we assume that u, ũ are uniformly continuous just
to make the proof simple and clean. In fact, the comparison principle holds for the
general case that u, ũ are a bounded viscosity subsolution in USC (Rn × [0, T ]), and
supersolution in LSC (Rn × [0, T ]) of (C) on Rn × [0, T ], respectively. The proof for
the general case follows the same philosophy as the above one. We leave this to the
interested readers to complete.
By using the comparison principle above, we obtain the following uniqueness result
immediately.
Theorem 4.6 (Uniqueness of viscosity solution). Under assumption (A1) there exists
at most one bounded uniformly continuous viscosity solution of (C) on Rn × [0, T ].
Theorem 4.7. Assume that (A1) holds. If v, ṽ are a bounded uniformly continuous
viscosity subsolution, and supersolution of
v + H(x, Dv) = 0 in Rn ,
respectively, then v ≤ ṽ on Rn .
Theorem 4.8. Assume that (A1) holds. If v, ṽ are, respectively, a bounded uniformly
continuous viscosity subsolution, and supersolution of
Since the proofs of Theorems 4.7, 4.8 are similar to that of Theorem 4.5, we omit
them.
4.5 Stability
It is really important mentioning that viscosity solutions remain stable under the L∞ -
norm. The following proposition shows this basic fact.
Proposition 4.9. Let {Hk }k∈N ⊂ C(Rn × Rn ) and {gk }k∈N ⊂ C(Rn ). Assume that
Hk → H, gk → g locally uniformly in Rn × Rn and in Rn , respectively, as k → ∞
for some H ∈ C(Rn × Rn ) and g ∈ C(Rn ). Let {uk }k∈N be viscosity solutions of
the Hamilton–Jacobi equations corresponding to {Hk }k∈N with uk (·, 0) = gk . Assume
furthermore that uk → u locally uniformly in Rn × [0, ∞) as k → ∞ for some u ∈
C(Rn × [0, ∞)). Then u is a viscosity solution of (C).
Proof. It is enough to prove that u is a viscosity subsolution of (C). Take φ ∈ C 1 (Rn ×
[0, ∞)) and assume that u − φ has a strict maximum at (x0 , t0 ) ∈ Rn × (0, ∞). By
the hypothesis, for k large enough, uk − φ has a maximum at some point (xk , tk ) ∈
Rn × (0, ∞) and (xk , tk ) → (x0 , t0 ) as k → ∞. By definition of viscosity subsolutions,
we have
φt (xk , tk ) + Hk (xk , Dφ(xk , tk )) ≤ 0.
We let k → ∞ to obtain the result.
Hereafter, we consider the stationary equation

H(x, Du) = 0 in Rn (4.12)

for simplicity.
Proposition 4.10.
(i) Let S− be a collection of subsolutions of (4.12), and define the function u on Rn by u(x) := sup{v(x) : v ∈ S−}.
(ii) Let S+ be a collection of supersolutions of (4.12), and define the function u on Rn by u(x) := inf{v(x) : v ∈ S+}.
Indeed, we observe

u(xk) + 2/k − φ(x0) > (vk − φ)(x0) ≥ (vk − φ)(yk) ≥ (u − φ)(yk) ≥ (u − φ)(x0).
From the above, we have (vk − φ)(yk ) → (u − φ)(x0 ) and (u − φ)(yk ) → (u − φ)(x0 )
as k → ∞. We consider any convergent subsequence {ykj }j∈N and denote its limit point by y0. Noting that u is lower semicontinuous, (u − φ)(x0) = lim infk→∞ (u − φ)(yk) ≥ (u − φ)(y0), which guarantees yk → x0 as k → ∞. Moreover, we get vk(yk) → u(x0) as k → ∞.
Now, by definition of viscosity supersolutions, we have
ū(x) = lim sup*α→∞ uα(x) := limα→∞ sup{ uβ(y) : |x − y| ≤ 1/β, β ≥ α }, (4.13)

u̲(x) = lim inf*α→∞ uα(x) := limα→∞ inf{ uβ(y) : |x − y| ≤ 1/β, β ≥ α }. (4.14)

We call ū and u̲ the upper half-relaxed limit and the lower half-relaxed limit of uα as α → ∞, respectively. Note that ū and u̲ are upper and lower semicontinuous, respectively. We show some stability properties of ū and u̲.
Theorem 4.11. Let {Hα }α∈R ⊂ C(Rn × Rn ). Assume that Hα → H locally uniformly
in Rn × Rn as α → ∞ for some H ∈ C(Rn × Rn ). Let {uα }α∈R ⊂ C(Rn ) be a family of
locally uniformly bounded functions, which are solutions of (4.12). Then the half-relaxed limits ū and u̲ are a subsolution and a supersolution of (4.12), respectively.
Lemma 4.12. Let {uα}α∈R ⊂ C(Rn) be a family of locally uniformly bounded functions, and let ū, u̲ be the functions defined by (4.13) and (4.14), respectively. Assume that ū − ϕ takes a strict maximum (resp., u̲ − ϕ takes a strict minimum) at some x0 ∈ Rn for some ϕ ∈ C1(Rn). Then there exist {xk}k∈N ⊂ Rn converging to x0 and {αk}k∈N converging to infinity such that uαk − ϕ attains a local maximum (resp., minimum) at xk ∈ Rn, and uαk(xk) → ū(x0) (resp., uαk(xk) → u̲(x0)) as k → ∞.
Proof. We only deal with the case of ū. Choose {yk}k∈N ⊂ Rn and {αk}k∈N converging to infinity so that yk → x0 and uαk(yk) → ū(x0). For r > 0, let xk ∈ B(x0, r) be a maximum point of uαk − ϕ on B(x0, r). By replacing the sequence by a subsequence if necessary, we may assume that xkj → x ∈ B(x0, r) and uαkj(xkj) → a ∈ R as j → ∞. Noting that uαkj(ykj) ≤ uαkj(xkj), sending j → ∞ yields (ū − ϕ)(x0) ≤ a − ϕ(x) ≤ (ū − ϕ)(x). Since x0 is a strict maximum point of ū − ϕ on Rn, we see that x = x0. Moreover, we see that a = ū(x0). Therefore, uαkj − ϕ attains a local maximum at xkj, xkj → x0, and uαkj(xkj) → ū(x0) as j → ∞.
Proposition 4.13. Let {uα}α∈R ⊂ C(Rn) be uniformly bounded in Rn. Assume that ū = u̲ =: u on a compact set K ⊂ Rn. Then, u ∈ C(K) and uα → u uniformly on K as α → ∞.
Proof. It is clear that u ∈ C(K). Suppose that uα does not converge uniformly to u on K. Then there exist ε0 > 0, a sequence {αk}k∈N converging to infinity, and a sequence {xk}k∈N ⊂ K such that |uαk(xk) − u(xk)| > ε0 for any k ∈ N. Since K is compact, we can extract a subsequence {xkj}j∈N such that xkj → x0 ∈ K as j → ∞. Sending j → ∞ and using ū = u̲ = u on K, we deduce 0 ≥ ε0, which is a contradiction.
Remark 4.5. The idea of using half-relaxed limits arises naturally when attempting to pass to the limit with maxima and minima. This result is particularly powerful when it can be combined with a comparison principle. If the comparison principle holds for the limit equation, then a straightforward consequence of Theorem 4.11 and Proposition 4.13 is that uα → u locally uniformly on Rn as α → ∞ for some u ∈ C(Rn).
If we have further that H is convex in p, then we are able to obtain more stability results. Assume that

(A2) p 7→ H(x, p) is convex for each x ∈ Rn.
Proposition 4.14. Assume that (A2) holds. Let u be a Lipschitz continuous func-
tion on Rn . Then, u is a viscosity subsolution of (4.12) if and only if u satisfies
H(x, Du(x)) ≤ 0 for almost every x ∈ Rn .
Proof. In light of Theorem 4.3 and Rademacher’s theorem, we can easily see that if u
is a viscosity subsolution of (4.12), then u satisfies H(x, Du(x)) ≤ 0 for a.e. x ∈ Rn .
Conversely, if u satisfies H(x, Du(x)) ≤ 0 for a.e. x ∈ Rn , then by mollification, for
each ε > 0, we can construct a smooth function uε satisfying H(x, Duε ) ≤ Cε in Rn as
in the proof of Proposition 1.5. Furthermore, uε → u locally uniformly in Rn as ε → 0.
Thus, in light of the stability result, Proposition 4.9, we obtain the conclusion.
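The convexity of H is exactly what makes the mollification step work: by Jensen's inequality, averaging an a.e. subsolution produces approximate smooth subsolutions. A small numerical illustration of this — the x-independent eikonal Hamiltonian H(p) = |p| − 1 and the sawtooth u below are our choices, so the x-dependence issue does not arise:

```python
import numpy as np

# u: 1-periodic sawtooth (distance to the integers), so |u'| = 1 a.e. and hence
# u satisfies the eikonal inequality H(Du) = |Du| - 1 <= 0 almost everywhere.
N = 4000
dx = 1.0 / N
x = np.arange(N) * dx
u = 0.5 - np.abs(x - 0.5)

def mollify_periodic(u, eta):
    """Convolution with a bump kernel of unit mass supported in (-eta, eta)."""
    m = int(eta / dx)
    r = np.arange(-m, m + 1) * dx
    ker = np.exp(-1.0 / np.clip(1.0 - (r / eta) ** 2, 1e-12, None))
    ker /= ker.sum()
    out = np.zeros_like(u)
    for k, w in zip(range(-m, m + 1), ker):
        out += w * np.roll(u, -k)
    return out

for eta in (0.1, 0.05):
    u_eta = mollify_periodic(u, eta)
    du = (np.roll(u_eta, -1) - np.roll(u_eta, 1)) / (2.0 * dx)
    # Jensen for the convex map p -> |p|: |Du_eta| <= |Du| * gamma_eta <= 1
    assert np.max(np.abs(du)) <= 1.0 + 1e-8
```

Note that u itself is not differentiable at the kinks, yet every mollification is smooth and satisfies the same gradient bound, which is the heart of Proposition 4.14.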
Corollary 4.15. Assume that (A2) holds. Let u be a Lipschitz continuous function on
Rn . Then, u is a viscosity solution of (4.12) if and only if H(x, p) = 0 for any x ∈ Rn ,
p ∈ D− u(x).
Proof. We only need to prove that u is a viscosity subsolution of (4.12) if and only if H(x, p) ≤ 0 for any x ∈ Rn, p ∈ D−u(x). By Proposition 4.14, we have the chain of equivalences

H(x, Du(x)) ≤ 0 in Rn in the viscosity sense
⇐⇒ H(x, Du(x)) ≤ 0 for almost every x ∈ Rn
⇐⇒ H(x, −Dv(x)) ≤ 0 for almost every x ∈ Rn, where v(x) := −u(x) for all x ∈ Rn
⇐⇒ H(x, −Dv(x)) ≤ 0 in Rn in the viscosity sense
⇐⇒ H(x, −p) ≤ 0 for any x ∈ Rn, p ∈ D+v(x)
⇐⇒ H(x, q) ≤ 0 for any x ∈ Rn, q ∈ D−u(x).
Sketch of proof. We first note that, for C > 0 sufficiently large, u0 + Ct and u0 − Ct are, respectively, a supersolution and a subsolution of (C)ε. By the comparison principle, we get

u0 − Ct ≤ uε ≤ u0 + Ct.

This, together with the comparison principle once more, yields ‖uεt‖L∞(Rn×[0,∞)) ≤ C.

Next, set φ := |Duε|²/2. A direct computation gives

φt + DpH · Dφ + ( H(x, Duε)² + DxH · Duε − C ) ≤ ε∆φ.

By the maximum principle and (A3), we get the desired result.
Theorem 4.18. Assume that (A1), (A3), (A4) hold. Then we obtain

uε → u locally uniformly on Rn × [0, ∞) as ε → 0,

where u ∈ C(Rn × [0, ∞)) is the unique viscosity solution of (C).
Proof. In view of Lemma 4.17, there exists a subsequence {uεj }j∈N such that

uεj → u locally uniformly on Rn × [0, ∞) as j → ∞,

for some u ∈ C(Rn × [0, ∞)), which is Lipschitz continuous. In particular, u is bounded and uniformly continuous on Rn × [0, T] for each T > 0.
By Theorems 4.1 and 4.6, we see that u is the unique viscosity solution of (C). This
implies further that uε → u locally uniformly on Rn × [0, ∞) as ε → 0.
The above construction of solutions is called Perron’s method. The use of this
method in the area of viscosity solutions was introduced by Ishii [47]. For simplicity
in this proof, we will assume u is continuous.
[7] G. Barles, H. Ishii, H. Mitake, On the large time behavior of solutions of Hamilton–
Jacobi equations associated with nonlinear boundary conditions, Arch. Ration.
Mech. Anal. 204 (2012), no. 2, 515–558.
[8] G. Barles, H. Ishii, H. Mitake, A new PDE approach to the large time asymptotics of solutions of Hamilton–Jacobi equations, Bull. Math. Sci. 3 (2013), 363–388.
[11] G. Barles, J.-M. Roquejoffre, Ergodic type problems and large time behaviour of un-
bounded solutions of Hamilton-Jacobi equations, Comm. Partial Differential Equa-
tions 31 (2006), no. 7-9, 1209–1225.
[17] F. Cagnetti, D. Gomes, H. Mitake, H. V. Tran, A new method for large time
behavior of convex Hamilton–Jacobi equations: degenerate equations and weakly
coupled systems, Annales de l’Institut Henri Poincaré - Analyse non linéaire 32
(2015), 183–200.
[19] F. Camilli, O. Ley, P. Loreti, and V. D. Nguyen, Large time behavior of weakly
coupled systems of first-order Hamilton–Jacobi equations, NoDEA Nonlinear Dif-
ferential Equations Appl., 19 (6), (2012), 719–749.
[21] A. Cesaroni, M. Novaga, Long-time behavior of the mean curvature flow with pe-
riodic forcing, Comm. Partial Differential Equations 38 (2013), no. 5, 780–801.
[23] M. G. Crandall, H. Ishii, and P.-L. Lions. User’s guide to viscosity solutions of
second order partial differential equations, Bull. Amer. Math. Soc. (N.S.), 27(1):1–
67, 1992.
[27] A. Davini, M. Zavidovique, Aubry sets for weakly coupled systems of Hamilton–Jacobi equations, SIAM J. Math. Anal. 46 (2014), no. 5, 3361–3389.
[28] R. J. DiPerna, P.-L. Lions, Ordinary differential equations, transport theory and Sobolev spaces, Invent. Math. 98 (1989), 511–547.
[29] W. E, Aubry-Mather theory and periodic solutions of the forced Burgers equation,
Comm. Pure Appl. Math. 52 (1999), no. 7, 811–828.
[30] L. C. Evans, Weak convergence methods for nonlinear partial differential equations.
CBMS 74, American Mathematical Society, 1990.
[33] L. C. Evans, Towards a quantum analog of weak KAM theory, Comm. Math. Phys.
244 (2004), no. 2, 311–334.
[36] W. H. Fleming and H. M. Soner, Controlled Markov processes and viscosity so-
lutions, Stochastic Modelling and Applied Probability, 25. Springer, New York,
2006.
[40] D. A. Gomes, Duality principles for fully nonlinear elliptic equations, Trends in
partial differential equations of mathematical physics, 125–136, Progr. Nonlinear
Differential Equations Appl., 61, Birkhäuser, Basel, 2005.
[41] D. A. Gomes, Generalized Mather problem and selection principles for viscosity
solutions and Mather measures, Adv. Calc. Var., 1 (2008), 291–307.
[43] D. Gomes, H. Mitake, H. V. Tran, The selection problem for discounted Hamilton–
Jacobi equations: some non-convex cases, arXiv:1605.07532 [math.AP], submitted.
[44] N. Ichihara, Large time asymptotic problems for optimal stochastic control with
superlinear cost, Stochastic Process. Appl. 122 (2012), no. 4, 1248–1275.
[47] H. Ishii, Perron's method for Hamilton–Jacobi equations, Duke Math. J. 55
(1987), no. 2, 369–384.
[48] H. Ishii, On the equivalence of two notions of weak solutions, viscosity solutions
and distribution solutions, Funkcial. Ekvac. 38 (1995), no. 1, 101–120.
[51] H. Ishii, H. Mitake, H. V. Tran, The vanishing discount problem and viscosity
Mather measures. Part 1: the problem on a torus, J. Math. Pures Appl., to
appear.
[52] H. Ishii, H. Mitake, H. V. Tran, The vanishing discount problem and viscosity
Mather measures. Part 2: boundary value problems, J. Math. Pures Appl.,
to appear.
[54] H. R. Jauslin, H. O. Kreiss and J. Moser, On the forced Burgers equation with
periodic boundary conditions, Proc. Symp. Pure Math. 65 (1999), 133–153.
[55] R. Jensen, The maximum principle for viscosity solutions of fully nonlinear second
order partial differential equations, Arch. Ration. Mech. Anal. 101 (1988), 1–27.
[56] W.-L. Jin, Y. Yu, Asymptotic solution and effective Hamiltonian of a Hamilton–
Jacobi equation in the modeling of traffic flow on a homogeneous signalized road,
J. Math. Pures Appl. (9) 104 (2015), no. 5, 982–1004.
[57] S. N. Kružkov, Generalized solutions of nonlinear equations of the first order with
several independent variables. II, (Russian) Mat. Sb. (N.S.) 72 (114) (1967), 108–134.
[58] O. Ley, V. D. Nguyen, Large time behavior for some nonlinear degenerate parabolic
equations, J. Math. Pures Appl. 102 (2014), 293–314.
[59] P.-L. Lions, Control of diffusion processes in R^N, Comm. Pure Appl. Math. 34
(1981), 121–147.
[63] J. N. Mather, Action minimizing invariant measures for positive definite La-
grangian systems, Math. Z. 207 (1991), no. 2, 169–207.
[67] H. Mitake, K. Soga, Weak KAM theory for discount Hamilton–Jacobi equations
and its application, preprint.
[68] H. Mitake, H. V. Tran, Remarks on the large time behavior of viscosity solutions of
quasi-monotone weakly coupled systems of Hamilton–Jacobi equations, Asymptot.
Anal. 77 (2012), 43–70.
[70] H. Mitake, H. V. Tran, Large-time behavior for obstacle problems for degenerate
viscous Hamilton–Jacobi equations, Calc. Var. Partial Differential Equations 54
(2015), 2039–2058.
[72] G. Namah, J.-M. Roquejoffre, Remarks on the long time behaviour of the solutions
of Hamilton–Jacobi equations, Comm. Partial Differential Equations 24 (1999), no.
5-6, 883–893.
[73] V. D. Nguyen, Some results on the large time behavior of weakly coupled systems
of first-order Hamilton–Jacobi equations, J. Evol. Equ. 14 (2014), 299–331.
[75] K. Soga, Selection problems of Z^2-periodic entropy solutions and viscosity solu-
tions, preprint (arXiv:1501.03594).
[78] H. V. Tran, Adjoint methods for static Hamilton–Jacobi equations, Calc. Var. Par-
tial Differential Equations 41 (2011), 301–319.
[79] Y. Yu, A remark on the semi-classical measure from −(h^2/2)Δ + V with a degenerate
potential V, Proc. Amer. Math. Soc. 135 (2007), no. 5, 1449–1454.