
Brownian Motion and Stochastic Calculus

Introductory Notes
Roberto Rigobon
MIT
Fall 2009

Contents

1 Basic Continuous Time
  1.1 Brownian Motion: Random Walk representation
      1.1.1 Properties
      1.1.2 Some approximations (from the Random Walk)
  1.2 Brownian Motion: Continuous Time Representation
      1.2.1 Itô's lemma
      1.2.2 Bellman Equation
  1.3 Constraints and Barriers
      1.3.1 Absorbing Barriers
      1.3.2 Reflecting Barriers
      1.3.3 Resetting Barriers
      1.3.4 Shifting Barriers
  1.4 Distributions and paths
  1.5 Control problem: defining optimal barriers

2 Applications
  2.1 Target Zones
      2.1.1 The Differential Equation
      2.1.2 Boundary Conditions
      2.1.3 Comments
  2.2 Cochrane-Longstaff-Santa Clara
      2.2.1 Evolution of the Share
      2.2.2 Solving for Stock Prices

1 Basic Continuous Time


Continuous time is an extremely useful tool. Why? Well, it is like approximating functions to solve difficult problems, except that the approximation is exact. In other words, every time you log-linearize a macro model around a deterministic steady state – for example – you are making an approximation that is rarely useful in practice. Shocks are rarely small enough for the approximation to make sense, and the deterministic steady state is unlikely to reflect the economy's steady state, so it is an uninteresting point to start from. When you work in continuous time, the approximation you make is an exact solution, and therefore it can be used to analyze real situations. Furthermore, because it is an approximation, you can solve models that are otherwise intractable.
We start by introducing the random walk representation of Brownian motion and deriving its most important properties. Then we provide the Wiener representation and derive the same results there as well. Finally, we use Brownian motion to study the behavior of the exchange rate in a target zone regime.

1.1 Brownian Motion: Random Walk representation.

Brownian motion is the only continuous process with independent Gaussian increments. Intuitive, no? Not exactly... Because this definition is not extremely useful as stated, we use representations to develop our intuition. The random walk is by far the most commonly used one.
Assume we define the following stochastic process:

\[
x_{t+\Delta t} - x_t = \begin{cases} \ \ \Delta h & \text{w/p } p \\ -\Delta h & \text{w/p } 1-p \end{cases}
\]

and assume that the process satisfies the following properties:

\[
E\left(x_{t+\Delta t} - x_t\right) = \mu \Delta t, \qquad
\operatorname{Var}\left(x_{t+\Delta t} - x_t\right) = \sigma^2 \Delta t
\]

Given these moments we can compute ∆h and p.

\[
\Delta h\,(2p - 1) = \mu \Delta t, \qquad
\Delta h^2 - \mu^2 \Delta t^2 = \sigma^2 \Delta t
\]

If ∆t is small such that ∆t² << ∆t, then this system of equations can be easily solved:

\[
x_{t+\Delta t} - x_t =
\begin{cases}
\ \ \sigma \sqrt{\Delta t} & \text{w/p } \tfrac{1}{2}\left(1 + \tfrac{\mu}{\sigma}\sqrt{\Delta t}\right) \\[4pt]
 -\sigma \sqrt{\Delta t} & \text{w/p } \tfrac{1}{2}\left(1 - \tfrac{\mu}{\sigma}\sqrt{\Delta t}\right)
\end{cases}
\]

When we take the limit ∆t → 0, this is the random walk representation of a Brownian motion with drift µ and variance σ². For the record, most books define these quantities the way we did: the jump up and down is called ∆h, the probability of going up is p, and the probability of going down is q.
We will take the limit as the steps go to zero and define this process as Brownian motion. Indeed, Brownian motion has some particular and very special properties.
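To build intuition, here is a minimal simulation sketch of the binomial random walk above (my own illustration, not part of the original notes; all parameter values are arbitrary). The terminal increments should be approximately normal with mean µT and variance σ²T.

```python
import numpy as np

# Simulate the binomial random walk approximation of a Brownian motion with
# drift mu and volatility sigma. Parameter values are illustrative.
mu, sigma = 0.2, 1.0
T, dt = 1.0, 1e-3
n_steps, n_paths = int(T / dt), 5000

dh = sigma * np.sqrt(dt)                      # jump size
p = 0.5 * (1.0 + (mu / sigma) * np.sqrt(dt))  # probability of an up move

rng = np.random.default_rng(0)
jumps = np.where(rng.random((n_paths, n_steps)) < p, dh, -dh)
x_T = jumps.sum(axis=1)                       # x_T - x_0 for each path

# The increments should be approximately N(mu*T, sigma^2*T).
print("sample mean:", x_T.mean(), "  theory:", mu * T)
print("sample var: ", x_T.var(),  "  theory:", sigma**2 * T)
```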

1.1.1 Properties

1. First, the path is continuous at every point; this means that there are no jumps. However, the path is everywhere non-differentiable. Continuity is easy to see because the limit lim∆t→0 (x_{t+∆t} − x_t) is zero. But the derivative over a small interval goes to infinity. Assume a positive jump; then the derivative is
\[
\lim_{\Delta t \to 0} \frac{x_{t+\Delta t} - x_t}{\Delta t}
= \lim_{\Delta t \to 0} \frac{\sigma\sqrt{\Delta t}}{\Delta t} \to \infty .
\]
Exactly the same occurs if we compute the derivative when negative shocks occur.

2. Second, the process has finite quadratic variation; thus the integral ∫ ∆x_t² has meaning. In fact, this is a very important property and, although it is hard to prove for the continuous time representation, it is easy to prove for the random walk one. Let's compute the square of ∆x_{t+∆t}: ∆x²_{t+∆t} = σ²∆t. This is true for either of the two possible paths. Hence, even though the process of increments is a random variable, the process of the squares of the increments is deterministic. We can add all the squared increments from time t₁ to time t₂ and take the limit as ∆t goes to zero. In the end, the sum is
\[
\int_{t_1}^{t_2} \Delta x_t^2
= \lim_{\Delta t \to 0} \sum_{t=t_1}^{t_2} \Delta x_{t+\Delta t}^2
= \lim_{\Delta t \to 0} \sigma^2 \Delta t \cdot \frac{t_2 - t_1}{\Delta t}
= \sigma^2 (t_2 - t_1),
\]
which is finite and well defined if the time interval is well defined.
3. Third, the length of every path is infinite: in other words, ∫ |∆x_t| → ∞. As in the previous case, along each path |∆x_t| is not a stochastic variable: |∆x_t| = σ√∆t. Let's compute the integral (or summation) as before:
\[
\int_{t_1}^{t_2} |\Delta x_t|
= \lim_{\Delta t \to 0} \sum_{t=t_1}^{t_2} |\Delta x_t|
= \lim_{\Delta t \to 0} \sigma\sqrt{\Delta t} \cdot \frac{t_2 - t_1}{\Delta t}
= \lim_{\Delta t \to 0} \sigma\left( \frac{t_2 - t_1}{\sqrt{\Delta t}} \right) \to \infty
\]

4. Fourth, the increments of the random walk are independent (this should be trivial from the definition).

5. Fifth, the random variable x_s − x_t is normally distributed with mean µ(s − t) and variance σ²(s − t). This is easy to prove using the central limit theorem and the fact that the increments are independent. In effect, for a given ∆t there are n steps, where n = (s − t)/∆t. Because at each node we have a Bernoulli draw with increments ±∆h, the sum of n Bernoullis has a Binomial distribution, and when ∆t goes to zero the Binomial distribution converges to a normal distribution. If you want to get familiar with this stuff, do the proofs and get used to the representation. They shouldn't be too hard. Do them between innings of tonight's game.
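Properties 2 and 3 are easy to check numerically. The sketch below (my own illustration, with arbitrary parameters) computes the sum of squared increments and the sum of absolute increments of a simulated path for progressively finer ∆t: the first settles at σ²(t₂ − t₁), the second blows up.

```python
import numpy as np

# Quadratic variation stays finite, path length diverges, as dt shrinks.
# mu, sigma and the horizon are illustrative choices.
mu, sigma, T = 0.0, 1.5, 2.0
rng = np.random.default_rng(1)

for dt in [1e-2, 1e-3, 1e-4, 1e-5]:
    n = int(T / dt)
    dx = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    print(f"dt={dt:.0e}  sum dx^2 = {np.sum(dx**2):7.3f} "
          f"(theory {sigma**2 * T:.3f})   sum |dx| = {np.sum(np.abs(dx)):10.1f}")
```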

1.1.2 Some approximations (from the Random Walk)

This is one of the most important results we have in continuous time. From the random walk representation we can derive Itô's lemma as an approximation. What is really important in the end is that this approximation is actually exact when we are dealing with a continuous time process. But in order to develop the intuition it is always useful to obtain this approximation first.
Imagine we have a function F of a variable that follows a Brownian motion. Let's approximate the expected value of the function by using a Taylor expansion.
\[
\begin{aligned}
E\, F(x_{t+\Delta t}) &= F(x_t) + F'(x_t)\, E\left(x_{t+\Delta t} - x_t\right) + \tfrac{1}{2} F''(x_t)\, E\left(x_{t+\Delta t} - x_t\right)^2 \\
&= F(x_t) + F'(x_t)\left[ \sigma\sqrt{\Delta t}\cdot\tfrac{1}{2}\!\left(1 + \tfrac{\mu}{\sigma}\sqrt{\Delta t}\right) - \sigma\sqrt{\Delta t}\cdot\tfrac{1}{2}\!\left(1 - \tfrac{\mu}{\sigma}\sqrt{\Delta t}\right) \right] \\
&\quad + \tfrac{1}{2} F''(x_t)\left[ \sigma^2 \Delta t\cdot\tfrac{1}{2}\!\left(1 + \tfrac{\mu}{\sigma}\sqrt{\Delta t}\right) + \sigma^2 \Delta t\cdot\tfrac{1}{2}\!\left(1 - \tfrac{\mu}{\sigma}\sqrt{\Delta t}\right) \right] \\
&= F(x_t) + F'(x_t)\,\sigma\!\left(\tfrac{\mu}{\sigma}\right)\Delta t + \tfrac{1}{2} F''(x_t)\,\sigma^2 \Delta t \\
&= F(x_t) + \mu F'(x_t)\,\Delta t + \tfrac{1}{2}\sigma^2 F''(x_t)\,\Delta t
\end{aligned}
\]

Notice that the first order term and the second order term are both proportional to ∆t.
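As a quick sanity check (my own sketch, not in the original notes), the approximation can be verified by Monte Carlo for a specific function, say F(x) = x³; the parameter values are illustrative.

```python
import numpy as np

# Check E[F(x_{t+dt})] - F(x_t)  ≈  (mu F'(x_t) + 0.5 sigma^2 F''(x_t)) dt
# for F(x) = x^3. Parameters are illustrative.
mu, sigma, x0, dt = 0.3, 0.8, 1.2, 1e-4
F, dF, d2F = (lambda x: x**3), (lambda x: 3 * x**2), (lambda x: 6 * x)

rng = np.random.default_rng(2)
x_next = x0 + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(2_000_000)

mc = F(x_next).mean() - F(x0)
theory = (mu * dF(x0) + 0.5 * sigma**2 * d2F(x0)) * dt
print(mc, theory)   # both are roughly 3.6e-4 (up to Monte Carlo noise)
```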

1.2 Brownian Motion: Continuous Time Representation.

There is a simpler representation (at least notationally) of Brownian motion. First, let's define the Wiener increment (dw) as the Brownian motion with zero drift and unit variance (variance dt). Because increments are normally distributed (Gaussian), we can represent any Brownian motion as a linear combination of a Wiener increment and a predictable drift. In other words,
\[
dx_t = \mu\, dt + \sigma\, dw_t \tag{1}
\]
This is the representation that we will use.

Notice the following properties of the Wiener process: E dw_t = 0 and E dw_t² = dt.¹ This means that the expected value of dx_t is E dx_t = µdt + σ E dw_t = µdt, while the variance is given by V(dx_t) = E(dx_t − µdt)² = σ² E dw_t² = σ² dt.
An alternative way of defining it is by starting the definition with the Wiener process:

Definition 1 (Brownian Motion) Let {w_t}_{t=0}^{∞} with w_t ~ N(0, 1). Define ∆x_t = µ∆t + σ√∆t · w_t, where t ∈ [0, T], n · ∆t = T, and n → ∞.

This is the more formal definition, although (at least to me) it is a little bit more obscure.
We are not going to prove normality – which is proved using the central limit theorem – nor the properties we have already proved in the random walk representation. The proofs are a little bit harder, but doing them does not provide additional intuition. We will, however, derive Itô's lemma.

1.2.1 Itô’s lemma

Itô's lemma is the result of a Taylor expansion and the properties of Brownian motion. Assume we have a function of time and x_t, F(x_t, t). The change in the function is given by
\[
dF = F_t\, dt + F_x\, dx + \frac{1}{2}\left( F_{tt}\, dt^2 + F_{xx}\, dx^2 + 2 F_{xt}\, dx\, dt \right)
\]
¹ Well, this property is far stronger than this: not only E dw² = dt, but dw² = dt. As we did before, using the random walk representation you will be able to prove it.

Substituting and taking the limit when dt goes to zero,
\[
dF = F_t\, dt + F_x\, dx + \frac{1}{2} F_{xx}\, \sigma^2 dt \tag{2}
\]
Note that we can separate the deterministic from the stochastic part,
\[
dF = \left( F_t + F_x \mu + \frac{1}{2} F_{xx} \sigma^2 \right) dt + F_x \sigma\, dw_t
\]

In other words, this is like an ordinary differential but where we have to add an additional term that comes from the non-vanishing quadratic variation of the Brownian motion. As before, the first and second order terms are of the same order of magnitude.
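As a concrete illustration (my own example, not from the notes), take F(x) = eˣ. Itô's lemma gives dF/F = (µ + ½σ²)dt + σ dwₜ, so E[F(x_T)] grows at rate µ + ½σ² rather than µ; the sketch below checks this by simulation with illustrative parameters.

```python
import numpy as np

# Itô's lemma check for F(x) = exp(x), where dx = mu dt + sigma dw.
# The lemma implies E[exp(x_T)] = exp(x_0 + (mu + 0.5*sigma^2) * T).
mu, sigma, x0, T, dt = 0.1, 0.5, 0.0, 1.0, 0.01
n, n_paths = int(T / dt), 100_000

rng = np.random.default_rng(3)
dx = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n))
x_T = x0 + dx.sum(axis=1)

print("simulated E[exp(x_T)]      :", np.exp(x_T).mean())
print("Ito: exp((mu+sigma^2/2)*T) :", np.exp(x0 + (mu + 0.5 * sigma**2) * T))
print("naive: exp(mu*T)           :", np.exp(x0 + mu * T))
```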

1.2.2 Bellman Equation

In almost all the problems we encounter we will end up deriving a Bellman Equation to describe the problem.
I would like to derive the equation for the general case.
Assume that an agent has a state x_t that evolves following a Brownian motion B(µ, σ), with drift parameter µ and variance parameter σ². Assume that the agent derives an instantaneous utility u(x_t) every dt, and that the discount rate is given by β. Assume that the agent does not make a choice; there is no maximization at all. For the moment we would just like to know the present value of the utility derived by the agent.
Some problems do look like this, where the value function simply evolves with the instantaneous utility. For example, problems of irreversible investment or sticky prices.
The discrete representation of the Bellman Equation is the following

\[
V(x_t) = u(x_t)\, dt + (1 - \beta dt)\, E\left[ V(x_{t+dt}) \right]
\]

which is the same as the typical representation, but where we are clear and specific about the flows per unit of time. For example, we are assuming that between t and t + dt the random variable takes the value x_t; therefore the utility flow is u(x_t) dt. Furthermore, the discount rate is β per instant of time, which means that the discounting applied over the interval is βdt.

Stationary problem: For simplicity, we assume stationarity of the problem so the value function does not depend on time. We make this specification first, and then relax it at the end of this subsection. The first step is to compute the expectation on the right hand side. For this we use Itô's lemma: we can approximate the function V(·) as follows,
\[
V(x_{t+dt}) = V(x_t) + V'(x_t)\, dx_t + \frac{1}{2} V''(x_t)\, [dx_t]^2 .
\]
Given the definition of the process we know that
\[
V(x_{t+dt}) = V(x_t) + V'(x_t)\left[ \mu\, dt + \sigma\, dw_t \right] + \frac{1}{2} V''(x_t)\, \sigma^2 dt .
\]
Taking expectations,
\[
E\, V(x_{t+dt}) = V(x_t) + V'(x_t)\, \mu\, dt + \frac{1}{2} V''(x_t)\, \sigma^2 dt .
\]
Substituting in the Bellman equation we obtain
\[
\begin{aligned}
V(x_t) &= u(x_t)\, dt + (1 - \beta dt)\left[ V(x_t) + V'(x_t)\, \mu dt + \frac{1}{2} V''(x_t)\, \sigma^2 dt \right] \\
&= u(x_t)\, dt + V(x_t)(1 - \beta dt) + V'(x_t)\left( \mu dt - \mu \beta dt^2 \right) + \frac{1}{2} V''(x_t)\left( \sigma^2 dt - \sigma^2 \beta dt^2 \right) .
\end{aligned}
\]
We know that dt² << dt, so
\[
\begin{aligned}
V(x_t)\, \beta dt &= u(x_t)\, dt + V'(x_t)\, \mu dt + \frac{1}{2} V''(x_t)\, \sigma^2 dt \\
&= \left[ u(x_t) + V'(x_t)\, \mu + \frac{1}{2} V''(x_t)\, \sigma^2 \right] dt .
\end{aligned}
\]
Notice that the left and right hand sides are proportional to dt. Therefore,
\[
\beta V(x_t) = u(x_t) + \mu V'(x_t) + \frac{1}{2} \sigma^2 V''(x_t) \tag{3}
\]
This is an ordinary differential equation. Given the utility function, we can solve for the value function.
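For instance, with exponential utility u(x) = e^{ax} (an illustrative choice of mine, not from the notes), guessing V(x) = e^{ax}/(β − aµ − ½a²σ²) solves equation (3), provided the denominator is positive. The short symbolic check below confirms it.

```python
import sympy as sp

# Verify that V(x) = exp(a*x) / (beta - a*mu - a^2*sigma^2/2) solves
# beta*V = u + mu*V' + (sigma^2/2)*V''  when  u(x) = exp(a*x).
x, a, mu, sigma, beta = sp.symbols("x a mu sigma beta", positive=True)
u = sp.exp(a * x)
V = u / (beta - a * mu - sp.Rational(1, 2) * a**2 * sigma**2)

residual = beta * V - (u + mu * sp.diff(V, x) + sp.Rational(1, 2) * sigma**2 * sp.diff(V, x, 2))
print(sp.simplify(residual))   # prints 0
```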

Non-Stationary Problem: A lot of the problems that we encounter are not stationary; for example, when time is finite. In this case, the value function changes through time and it is impossible to argue that the same function is valid at every instant. Another example is a European option, which pays off only at a given date. In these cases, we have to formally take into account the fact that the value function depends on time – as well as on the state variable x.
The Bellman equation is identical except that the approximation of the function is different. The expectation on the right hand side is given by
\[
\begin{aligned}
V(x_{t+dt}, t+dt) &= V(x_t, t) + V_x(x_t, t)\, dx_t + V_t(x_t, t)\, dt + \frac{1}{2} V_{xx}(x_t, t)\, [dx_t]^2 \\
E\, V(x_{t+dt}, t+dt) &= V(x_t, t) + \mu V_x(x_t, t)\, dt + V_t(x_t, t)\, dt + \frac{1}{2}\sigma^2 V_{xx}(x_t, t)\, dt ,
\end{aligned}
\]
so the Bellman equation is
\[
\beta V(x_t, t) = u(x_t) + \mu V_x(x_t, t) + \frac{1}{2}\sigma^2 V_{xx}(x_t, t) + V_t(x_t, t) \tag{4}
\]
which is a partial differential equation – in fact a parabolic differential equation. For some simple processes and simple utility functions, these equations have simple solutions. However, in most cases they have to be solved numerically. There is a technique called multigrid that is fantastic for numerically solving partial differential equations.
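As one simple (and far less sophisticated than multigrid) numerical approach, here is a sketch of an explicit finite-difference scheme that steps equation (4) backwards in time from a terminal condition. The terminal payoff, grid, boundary treatment, and parameters are all illustrative assumptions of mine.

```python
import numpy as np

# Explicit backward-in-time finite differences for
#   beta*V = u(x) + mu*V_x + 0.5*sigma^2*V_xx + V_t
# i.e.  V_t = beta*V - u(x) - mu*V_x - 0.5*sigma^2*V_xx.
# Terminal condition V(x, T) and all parameters are illustrative assumptions.
beta, mu, sigma = 0.05, 0.1, 0.4
x = np.linspace(-2.0, 2.0, 201)
dx = x[1] - x[0]
T, dt = 1.0, 0.2 * dx**2 / sigma**2          # small dt keeps the scheme stable
n_steps = int(T / dt)

u = x**2                                     # illustrative flow utility
V = np.zeros_like(x)                         # illustrative terminal condition V(x, T) = 0

for _ in range(n_steps):
    Vx  = np.gradient(V, dx)                 # first derivative
    Vxx = np.zeros_like(V)
    Vxx[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
    Vt = beta * V - u - mu * Vx - 0.5 * sigma**2 * Vxx
    V = V - dt * Vt                          # step from t back to t - dt
    V[0], V[-1] = V[1], V[-2]                # crude reflecting-type boundary (assumption)

print(V[100])   # value at x = 0 and t = 0 under these assumptions
```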

What makes Brownian motion so special? Brownian motion is a very special process because when we take the limit as ∆t goes to zero, the jumps and the probabilities converge at a very particular speed, and it is that speed that makes the mean and the variance of the process proportional to the time elapsed.
For example, imagine we were doing this with a standard AR process, something like x_{t+∆t} − x_t = (µ + σε_t) · ∆t, where ε_t is a standard normal random variable. In this case, the first order term would have been proportional to ∆t, while the second order term would have been proportional to ∆t². In other words, the expected increment is µ∆t, while the variance is σ²∆t². This means that when the limit is taken, the second order term disappears at a faster speed, making it irrelevant: the approximation of the function would only depend on the first order term.
In the Brownian motion case, this does not happen: the first and second order terms are of the same order of magnitude. Interestingly, when we solve the discrete time problems, the second order terms are as relevant as the first order ones for some of the choices. Hence, the limit of the Brownian motion, where the second order term survives, is a much better representation of the economic problem at hand than the one derived from the AR example discussed above. There is an additional advantage of Brownian motion: because this is a limit, the approximation is exact, and not a linearization.
There are other important equations that can be derived from the random walk representation: for instance, continuous time boundary conditions and constraints, and the Kolmogorov forward and backward equations for ergodic distributions. This is what we do next.

1.3 Constraints and Barriers

Every problem we will encounter has a boundary restriction or a constraint. Specifying these constraints requires some understanding of how Brownian motion actually works. In this section we look at some of the most common constraints and derive a "methodology" to deal with them.
We study four types of restrictions: absorbing, reflecting, resetting, and shifting. In each example I will try to provide some intuition of what they are supposed to represent. We are going to derive all these formulas for the stationary case. However, the extension to the non-stationary one should be trivial.
Because we are going to use it frequently, let us remember the Bellman Equation:
\[
\beta V(x_t) = u(x_t) + \mu V'(x_t) + \frac{1}{2}\sigma^2 V''(x_t)
\]

1.3.1 Absorbing Barriers

An absorbing barrier arises when the value function takes a particular value once a certain state is reached. For example, assume that we know that when the state reaches the value x̄ the instantaneous payoff is ū permanently, and that the state will remain at x̄ with probability one.
So, this is like a final condition. When the economy reaches a certain state it is trapped (absorbed). What is the stochastic process of x?
When x is not at the absorbing state,
\[
x_{t+\Delta t} = x_t + \begin{cases} \ \ \Delta h & \text{w/p } p \\ -\Delta h & \text{w/p } 1-p \end{cases}
\qquad \text{where } p = \frac{1}{2}\left(1 + \frac{\mu}{\sigma}\sqrt{\Delta t}\right),
\]
while when the economy reaches the absorbing state (x_t = x̄),
\[
x_{t+\Delta t} = x_t \quad \text{with probability } 1 .
\]

What is the Bellman equation when we are at x̄? This is equivalent to asking what the Bellman equation is when both µ = σ = 0, because at x̄ the process has drift and variance exactly equal to zero:
\[
\beta V(\bar{x}) = u(\bar{x}) .
\]
So, the boundary condition for the differential equation is
\[
V(\bar{x}) = \frac{u(\bar{x})}{\beta} .
\]
This is a very simple case in which we know the value of the function at a particular state. In finance such restrictions are common. For example, in the case of an option we know exactly the value of the payoff at the terminal time, and therefore we can describe V(x, T) perfectly. We use those restrictions to solve the differential equation.
In macro and international economics, however, absorbing barriers are not that common. First, they can arise when priors reach zero or one – which in continuous time we know will never happen; second, when some types disappear or come to dominate the whole world – a case of entry and exit; third, when one country becomes infinitely larger than the other; etc. Some of these are rare circumstances, but they are good constraints to impose if we know something about the solution of the problem in these extreme conditions. In fact, except for the entry/exit case, the other two are always extreme circumstances. In macro, the most common restrictions are reflecting and resetting barriers. These are the next two examples.

1.3.2 Reflecting Barriers

Assume that we know that when a certain state is reached the stochastic process cannot continue in one direction. For example, assume that there is an upper bound on the stochastic process: when that state is reached, the process either remains there or moves down. Similarly, we can have a bottom state below which we know the economy cannot fall further.
This is the case of a credible target zone. Imagine the central bank commits that if the exchange rate reaches some level they will start selling to force the price down, but if the price falls below some level they will buy. If we believe that the central bank will do so, and it is capable of doing so, then when the economy reaches that state of the world the stochastic process can only move in one direction.
Assume that an upper reflecting barrier is at x̄. Then, when we are close to the barrier, the evolution of the process is as follows.
In equations, the standard process is
\[
x_{t+\Delta t} = x_t + \begin{cases} \ \ \Delta h & \text{w/p } p \\ -\Delta h & \text{w/p } 1-p \end{cases}
\qquad \text{where } p = \frac{1}{2}\left(1 + \frac{\mu}{\sigma}\sqrt{\Delta t}\right),
\]
while at the reflecting barrier (x_t = x̄) the process is
\[
x_{t+\Delta t} = x_t + \begin{cases} \ \ 0 & \text{w/p } p \\ -\Delta h & \text{w/p } 1-p \end{cases}
\qquad \text{where } p = \frac{1}{2}\left(1 + \frac{\mu}{\sigma}\sqrt{\Delta t}\right).
\]

Let us derive the evolution of the value function when it is at the barrier. In discrete time we have
\[
V(\bar{x}) = u(\bar{x})\, \Delta t + (1 - \beta \Delta t)\, E\left[ V(x_{t+\Delta t}) \right],
\]
where the approximation of the function comes from Itô's lemma:
\[
\begin{aligned}
E\, V(x_{t+\Delta t}) &= V(\bar{x}) + p\, V'(\bar{x}) \cdot (\bar{x} - \bar{x}) + (1-p)\, V'(\bar{x}) \cdot (\bar{x} - \Delta h - \bar{x}) \\
&\quad + \frac{1}{2} p\, V''(\bar{x}) \cdot (\bar{x} - \bar{x})^2 + \frac{1}{2} (1-p)\, V''(\bar{x}) \cdot (\bar{x} - \Delta h - \bar{x})^2 .
\end{aligned}
\]
Notice that because we are on the reflecting barrier the ”positive” jumps truly remain in the same place.
We can substitute for ∆h and p.
\[
E\, V(x_{t+\Delta t}) = V(\bar{x})
+ \frac{1}{2} V'(\bar{x}) \left(1 - \frac{\mu}{\sigma}\sqrt{\Delta t}\right)\left(-\sigma\sqrt{\Delta t}\right)
+ \frac{1}{2}\cdot\frac{1}{2} V''(\bar{x}) \left(1 - \frac{\mu}{\sigma}\sqrt{\Delta t}\right)\left(-\sigma\sqrt{\Delta t}\right)^2
\]
Substituting in the Bellman equation,
\[
\begin{aligned}
V(\bar{x}) &= u(\bar{x})\, \Delta t + (1 - \beta\Delta t)\, V(\bar{x})
+ (1 - \beta\Delta t)\, \frac{1}{2} V'(\bar{x}) \left(1 - \frac{\mu}{\sigma}\sqrt{\Delta t}\right)\left(-\sigma\sqrt{\Delta t}\right) \\
&\quad + (1 - \beta\Delta t)\, \frac{1}{4} V''(\bar{x}) \left(1 - \frac{\mu}{\sigma}\sqrt{\Delta t}\right)\left(-\sigma\sqrt{\Delta t}\right)^2 .
\end{aligned}
\]
Eliminating V(x̄) from both sides and rearranging terms we have
\[
\beta \Delta t\, V(\bar{x}) = u(\bar{x})\, \Delta t
+ V'(\bar{x})\, (1 - \beta\Delta t)\, \frac{1}{2} \left(1 - \frac{\mu}{\sigma}\sqrt{\Delta t}\right)\left(-\sigma\sqrt{\Delta t}\right)
+ V''(\bar{x})\, (1 - \beta\Delta t)\, \frac{1}{4} \left(1 - \frac{\mu}{\sigma}\sqrt{\Delta t}\right) \sigma^2 \Delta t .
\]

Notice that the second term on the right is proportional to √∆t, while the third one is proportional to ∆t. By eliminating all the terms of higher order in ∆t we simplify to
\[
\beta V(\bar{x})\, \Delta t = u(\bar{x})\, \Delta t - \frac{1}{2} V'(\bar{x})\, \sigma\sqrt{\Delta t} + \frac{1}{4} V''(\bar{x})\, \sigma^2 \Delta t,
\]
and because √∆t >> ∆t, this constraint simplifies to
\[
V'(\bar{x}) = 0 \tag{5}
\]
So, at any reflecting barrier, the value function approaches the barrier with zero slope. By the way, this is the case given the form of the reflection. There are other mechanisms by which the reflection could take place, but this is the simplest one and it produces an incredibly simple constraint.
This is exactly the constraint we will use in the Target Zone example.
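For intuition, here is a small simulation sketch (my own, with illustrative parameters) of the random walk with an upper reflecting barrier as specified above: at the barrier, an "up" move keeps the state in place while a "down" move is unchanged.

```python
import numpy as np

# Simulate the random walk with an upper reflecting barrier at x_bar.
# Parameter values and the barrier level are illustrative assumptions.
mu, sigma, dt = 0.5, 1.0, 1e-4
x_bar, x0, T = 1.0, 0.0, 5.0
n = int(T / dt)

dh = sigma * np.sqrt(dt)
p = 0.5 * (1.0 + (mu / sigma) * np.sqrt(dt))

rng = np.random.default_rng(4)
x = np.empty(n + 1)
x[0] = x0
for t in range(n):
    up = rng.random() < p
    if x[t] >= x_bar:                 # at the barrier: up moves are suppressed
        step = 0.0 if up else -dh
    else:
        step = dh if up else -dh
    x[t + 1] = x[t] + step

print("fraction of time at or near the barrier:", np.mean(x >= x_bar - dh))
```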

1.3.3 Resetting Barriers

Resetting barriers are also extremely important and used all over the place. In general there are two types of resetting barriers: the ones with fixed costs, and the ones with proportional costs.
Examples of fixed costs are irreversible investment, sticky prices, and inventory problems. The idea is that when the economy reaches a certain state (x̄) the agent receives a flow F (positive or negative) and the state jumps to x̂. In the irreversible investment or inventory model, when the capital is too low (or inventory is too low) we pay a fixed cost F and we reinvest (change the capital stock) or purchase goods (change the inventory).
Let's see what happens to the value function. Remember that when the economy is at x̄ it receives the fixed flow and the instantaneous utility, and then jumps to x̂ independently of the realization of the state variable:
\[
V(\bar{x}) = u(\bar{x})\, \Delta t + (1 - \beta\Delta t)\left[ V(\hat{x}) + F \right]
\]


Taking the limit as ∆t goes to zero,
\[
V(\bar{x}) = V(\hat{x}) + F \tag{6}
\]
In some "circles", this constraint is called the value matching constraint. When you stare at it, it makes a lot of sense. When the economy reaches x̄ it automatically jumps to x̂ and gets a flow F. Imagine the flow F were zero; then the two value functions should have the exact same value – and they do according to this constraint. In the case of menu costs, irreversible investment, or inventory replenishment, the flow is always negative.

There is another type of cost, one that is proportional to the movement. In other words, the cost is proportional to (x̂ − x̄). In this case, the value function is
\[
V(\bar{x}) = u(\bar{x})\, \Delta t + (1 - \beta\Delta t)\left[ V(\hat{x}) + f \cdot (\hat{x} - \bar{x}) \right],
\]
which implies
\[
\frac{V(\bar{x}) - V(\hat{x})}{\bar{x} - \hat{x}} = -f,
\]
where f is the proportional cost.
Of course there are combinations of these two types of constraints. But the constraints, in the end, are not much more complicated than what we have derived.

1.3.4 Shifting Barriers

Finally, an extremely interesting set of constraints arises when the profit flow changes in some region of the state space. In other words, assume that for all x ≤ x̄ there is an instantaneous utility function (or profit function) given by u_l, while for the rest of the space the utility is u_h.
This is a very common constraint when we are dealing with problems such as different regimes of competition, or regions of credit constraints, etc. In general, we can think of different competitive arrangements where on one side of the state space the agents are playing one game, and a different game on the other.
Clearly, there are two value functions and two Bellman equations, one for each regime,
\[
\begin{aligned}
\beta V_l(x_t) &= u_l(x_t) + \mu V_l'(x_t) + \frac{1}{2}\sigma^2 V_l''(x_t) \\
\beta V_h(x_t) &= u_h(x_t) + \mu V_h'(x_t) + \frac{1}{2}\sigma^2 V_h''(x_t)
\end{aligned}
\]
where the constraint implies the following Bellman equation at x̄ (in discrete time):
\[
V_l(\bar{x}) = u_l(\bar{x})\, \Delta t + (1 - \beta\Delta t)\left[ p\, V_h(\bar{x} + \Delta h) + (1-p)\, V_l(\bar{x} - \Delta h) \right],
\]
where the assumption is that at the barrier, if the state jumps up we shift to the other regime, but if it drops we remain in the low regime. The approximation of the function implies
\[
\begin{aligned}
V_h(\bar{x} + \Delta h) &= V_h(\bar{x}) + V_h'(\bar{x})\, \Delta h + \frac{1}{2} V_h''(\bar{x})\, \Delta h^2 \\
V_l(\bar{x} - \Delta h) &= V_l(\bar{x}) - V_l'(\bar{x})\, \Delta h + \frac{1}{2} V_l''(\bar{x})\, \Delta h^2 .
\end{aligned}
\]
Substituting,
\[
\begin{aligned}
V_l(\bar{x}) &= u_l(\bar{x})\, \Delta t + (1 - \beta\Delta t)\, p\left[ V_h(\bar{x}) + V_h'(\bar{x})\, \Delta h + \frac{1}{2} V_h''(\bar{x})\, \Delta h^2 \right] \\
&\quad + (1 - \beta\Delta t)\, (1-p)\left[ V_l(\bar{x}) - V_l'(\bar{x})\, \Delta h + \frac{1}{2} V_l''(\bar{x})\, \Delta h^2 \right] .
\end{aligned}
\]

Substituting the probabilities and eliminating all the higher-order terms, we end up with
\[
\begin{aligned}
V_l(\bar{x}) &= u_l(\bar{x})\, \Delta t + \frac{1}{2}\left(1 + \frac{\mu}{\sigma}\sqrt{\Delta t}\right) V_h(\bar{x}) + \frac{1}{2}\left(1 + \frac{\mu}{\sigma}\sqrt{\Delta t}\right) V_h'(\bar{x})\, \Delta h \\
&\quad + (1 - \beta\Delta t)\, \frac{1}{2}\left(1 - \frac{\mu}{\sigma}\sqrt{\Delta t}\right) V_l(\bar{x}) - \frac{1}{2}\left(1 - \frac{\mu}{\sigma}\sqrt{\Delta t}\right) V_l'(\bar{x})\, \Delta h .
\end{aligned}
\]

Notice that the terms of order one (those not multiplied by any power of ∆t) give
\[
V_l(\bar{x}) = \frac{1}{2} V_h(\bar{x}) + \frac{1}{2} V_l(\bar{x}),
\]
which implies
\[
V_l(\bar{x}) = V_h(\bar{x}) \tag{7}
\]

Interestingly, once this condition is imposed, we also have terms that are proportional to √∆t. Those are
\[
0 = \frac{1}{2}\frac{\mu}{\sigma}\sqrt{\Delta t}\, V_h(\bar{x}) + \frac{1}{2} V_h'(\bar{x})\, \Delta h
- \frac{1}{2}\frac{\mu}{\sigma}\sqrt{\Delta t}\, V_l(\bar{x}) - \frac{1}{2} V_l'(\bar{x})\, \Delta h ,
\]
and after substituting the fact that the two functions take the same value at x̄ (equation (7)), the first and third terms cancel each other and we find that
\[
V_h'(\bar{x}) = V_l'(\bar{x}) \tag{8}
\]

So, not only does the value function have to be continuous across x̄, it is also differentiable there.

1.4 Distributions and paths

Another important aspect of any Brownian motion problem is to understand how it evolves, and what its distribution is. The laws of motion of these processes are described by the Kolmogorov forward and backward equations. The idea is in general very simple: in one equation we write the probability that a certain state is reached – which obviously takes into consideration the barriers – while the other equation describes the distribution of the next states conditional on where the economy is.
In terms of pictures, the idea is that in one equation we compute the mass of observations that arrive at state x coming from all possible states, while the other computes where the state will end up assuming we start at x.
Let's see some examples. Assume we are interested in estimating the ergodic distribution (assuming one exists). Assume we have a Brownian motion and there are reflecting barriers at x_l and x_h. Define the probability of being in state x as φ(x). For all the x's between the two reflecting barriers, the probability of being in state x is the probability of being in state x − ∆h and getting a positive shock, plus the mass at state x + ∆h times the probability of a bad shock. Technically, this is
\[
\phi(x) = p\, \phi(x - \Delta h) + (1-p)\, \phi(x + \Delta h) .
\]
From Itô's lemma we know that
\[
\begin{aligned}
\phi(x - \Delta h) &= \phi(x) - \Delta h\, \phi'(x) + \frac{1}{2}(\Delta h)^2 \phi''(x) \\
\phi(x + \Delta h) &= \phi(x) + \Delta h\, \phi'(x) + \frac{1}{2}(\Delta h)^2 \phi''(x) ,
\end{aligned}
\]
which implies that
\[
\phi(x) = \phi(x) + (1 - 2p)\, \phi'(x)\, \Delta h + \frac{1}{2}(\Delta h)^2 \phi''(x) .
\]
Substituting for p and ∆h,
\[
0 = \left( -\frac{\mu}{\sigma}\sqrt{\Delta t} \right) \sigma\sqrt{\Delta t}\, \phi'(x) + \frac{1}{2}\left( \sigma\sqrt{\Delta t} \right)^2 \phi''(x) .
\]
Notice that both terms are proportional to ∆t; hence,
\[
\frac{1}{2}\sigma^2 \phi''(x) - \mu\, \phi'(x) = 0 .
\]
The other Kolmogorov equation computes where the state variable (or particle) moves next. In this case, the law of motion is
\[
\phi(x) = p\, \phi(x + \Delta h) + (1-p)\, \phi(x - \Delta h),
\]
which is very similar to the one we had before, except that here we are saying that the mass of particles at x moves up with probability p and down with probability 1 − p. Using the same steps as before we arrive at
\[
\frac{1}{2}\sigma^2 \phi''(x) + \mu\, \phi'(x) = 0 .
\]
The first one is the Kolmogorov forward equation and the second one is the backward equation. We have to impose boundary conditions, and those are imposed by stating the evolution of the probability distribution next to the barriers. So, if we are at the lower boundary and get a negative shock, we remain on the same boundary:
\[
\phi(x_l) = p\, \phi(x_l + \Delta h) + (1-p)\, \phi(x_l) .
\]
Substituting we have
\[
\begin{aligned}
\phi(x_l) &= p\left[ \phi(x_l) + \Delta h\, \phi'(x_l) + \frac{1}{2}(\Delta h)^2 \phi''(x_l) \right] + (1-p)\, \phi(x_l) \\
0 &= \frac{1}{2}\left(1 + \frac{\mu}{\sigma}\sqrt{\Delta t}\right) \sigma\sqrt{\Delta t}\, \phi'(x_l)
+ \frac{1}{2}\left(1 + \frac{\mu}{\sigma}\sqrt{\Delta t}\right) \frac{1}{2}\sigma^2 \Delta t\, \phi''(x_l) \\
0 &= \frac{1}{2}\sigma\sqrt{\Delta t}\, \phi'(x_l) + O(\Delta t) \\
0 &= \phi'(x_l)
\end{aligned}
\]
The same constraint is obtained at the upper bound. So, the differential equation is solved with these two constraints in mind. There are other constraints worth emphasizing, such as the fact that the probability distribution has to integrate to one, etc.
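To see what such an ergodic distribution looks like, here is a small simulation sketch (mine, with illustrative parameter values): a drifting random walk reflected at both barriers is simulated for a long time and its occupancy histogram is printed.

```python
import numpy as np

# Estimate the ergodic distribution of a Brownian motion with drift that is
# reflected at x_l and x_h, using the random walk representation.
# All parameter values are illustrative.
mu, sigma, dt = 0.5, 1.0, 1e-3
x_l, x_h = 0.0, 1.0
n = 1_000_000

dh = sigma * np.sqrt(dt)
p = 0.5 * (1.0 + (mu / sigma) * np.sqrt(dt))

rng = np.random.default_rng(5)
ups = rng.random(n) < p
x = np.empty(n)
x[0] = 0.5 * (x_l + x_h)
for t in range(1, n):
    step = dh if ups[t] else -dh
    x[t] = min(max(x[t - 1] + step, x_l), x_h)   # reflect at the barriers

hist, edges = np.histogram(x, bins=10, range=(x_l, x_h), density=True)
for lo, hi, d in zip(edges[:-1], edges[1:], hist):
    print(f"[{lo:.1f}, {hi:.1f}): {d:.2f}")
```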

1.5 Control problem: defining optimal barriers

When we are solving a control problem, there are optimality conditions that must be satisfied. In general they are very intuitive and can be derived in the same way we have derived the constraints – usually they are a derivative of the constraints.
There are great references for Brownian motion and stochastic calculus. My preferred ones are Øksendal (2003), Harrison (1990), Dixit (1993), and Karatzas and Shreve (1988).

2 Applications
We study two interesting applications of Brownian motion. The first one is the solution of the Target Zone problem for exchange rates, and the second one is a solution to the CLSC paper – this is the two-country, single-good asset pricing problem we discussed earlier.

2.1 Target Zones

From the quantity theory of money we have the following equation:
\[
m_t - p_t = y_t + v_t - \alpha E \pi_t \tag{9}
\]

Using purchasing power parity, and assuming foreign prices have no inflation and are equal to one, we can rewrite equation (9) as
\[
S_t = m_t - y_t - v_t + \alpha E \hat{S}_t \tag{10}
\]
where S_t = p_t − p*_t is the spot exchange rate, and Ŝ_t = π_t − π*_t is the exchange rate depreciation (which follows from our assumption of fixed foreign prices). Just a reminder: the inflation rate is the change in prices, and therefore the exchange rate depreciation is just the change in the spot exchange rate.
Assume fundamentals are described by a Brownian motion process: m_t − y_t − v_t = x_t. Then equation (10) is what is called a stochastic differential equation.
Let's see how we can solve it. Write
\[
S_t = x_t + \alpha E \hat{S}_t, \qquad E \hat{S}_t = \frac{E\, dS_t}{dt} .
\]
S_t is a function of x_t. Assume that x_t is given by equation (11):

\[
dx_t = \mu\, dt + \sigma\, dw_t \tag{11}
\]

This means that we can apply Itô's lemma and take expectations:
\[
\begin{aligned}
dS_t &= \frac{\partial S}{\partial t}\, dt + \frac{\partial S}{\partial x}\, dx_t + \frac{1}{2}\frac{\partial^2 S}{\partial x^2}\, dx_t^2 \\
&= \left( \frac{\partial S}{\partial t} + \frac{\partial S}{\partial x}\mu + \frac{1}{2}\frac{\partial^2 S}{\partial x^2}\sigma^2 \right) dt + \frac{\partial S}{\partial x}\sigma\, dw_t \\
E\, dS_t &= \left( \frac{\partial S}{\partial t} + \frac{\partial S}{\partial x}\mu + \frac{1}{2}\frac{\partial^2 S}{\partial x^2}\sigma^2 \right) dt
\end{aligned}
\]

Substituting in the differential equation we obtain
\[
S = x_t + \alpha\left( \frac{\partial S}{\partial t} + \frac{\partial S}{\partial x}\mu + \frac{1}{2}\frac{\partial^2 S}{\partial x^2}\sigma^2 \right),
\]
which is a partial differential equation with boundary conditions that we will define later. It should be clear by now that what is left is just algebra: if you have the boundary conditions, the solution of the partial differential equation is simple.
Note that so far we have not imposed the fact that the exchange rate is operating under a target zone. This is where we get the boundary conditions. First assume that the exchange rate moves between the bounds [L, U]. This means that if the fundamentals imply an exchange rate between L and U then there is no intervention. However, if the fundamentals imply a different exchange rate, the central bank will intervene to move the exchange rate back toward the band. For simplicity (see Bertola and Caballero for the relaxation of this assumption) assume that the central bank has an infinite amount of reserves, so its commitment to the band is fully credible.

2.1.1 The Differential Equation

First, because the bands are time invariant and the process is time invariant, the functional form of S depends on time only through the value of the fundamental. Therefore, ∂S/∂t = 0. The partial differential equation is then a second order ordinary differential equation. So, instead of using the notation ∂S/∂x, we note that the spot exchange rate is a function of the fundamental alone and represent the first derivative simply as S′:
\[
S = x_t + \alpha\left( \mu S' + \frac{1}{2}\sigma^2 S'' \right)
\]

There are two parts to the solution of this differential equation: the homogeneous and the particular solution. It is important to highlight that if there were no target zone, then the only solution to this differential equation would be the particular solution. To find it, simply posit
\[
S(x_t) = a + b x_t + c x_t^2 + \ldots
\]
Substituting and equating coefficients, we find that the particular solution is S(x_t) = x_t + αµ.
The homogeneous solution takes the form S(x_t) = Ae^{λx_t}. Substituting in the homogeneous differential equation, we find that the roots have to satisfy 1 = α(µλ + σ²λ²/2). So, the general solution is
\[
S(x) = A e^{\lambda_1 x} + B e^{\lambda_2 x} + x + \alpha\mu
\]

So, A and B have to be found from the boundary conditions.

2.1.2 Boundary Conditions

If there are no boundary conditions, the exchange rate function is described only by the particular solution. Let us forget about the αµ term for a moment – which is simply the effect of the expected depreciation on the exchange rate. Notice that when there are no boundaries the exchange rate fully follows the fundamentals – in other words, the changes in the exchange rate are perfectly correlated with the changes in the fundamentals. If we have bands, however, we know that when we are at the lower boundary (for example) the exchange rate cannot go down any further, so it either remains in the same place or moves up. This is anticipated by the agents (in the expectation) and therefore it affects the level of the exchange rate immediately.
How can we take this into account? The first step is to determine the law of motion of the fundamental when we are close to the boundaries. As we said before, assume that at x = l the exchange rate takes the value L, and that at x = u the exchange rate is equal to U. The constraint on the law of motion of the Brownian motion at the boundary implies that when x = l the fundamentals cannot fall further, and when x = u the fundamentals cannot increase further. In fact, that is exactly what the central bank does: that is the intervention it performs to keep the fundamentals within the admissible range.
This constraint on the law of motion of the fundamental implies that, at x_t = l, the stochastic process is
\[
x_{t+\Delta t} - x_t = \begin{cases} \Delta h & \text{w/p } p \\ 0 & \text{w/p } 1-p \end{cases}
\qquad \text{where } p = \frac{1}{2}\left(1 + \frac{\mu}{\sigma}\sqrt{\Delta t}\right).
\]
Let us see the implications of this process for the exchange rate. Remember that the original law of motion of the exchange rate – before applying Itô's lemma – is
\[
S_t = x_t + \alpha E \hat{S}_t .
\]
Let us now evaluate this equation at the boundary. Remember that when x_t = l the exchange rate takes the value L. This means that
\[
\alpha\, \frac{E\, \Delta S}{\Delta t} = S(l) - l .
\]

At l the fundamentals either increase or remain the same (this is the intervention). Given this, the change in the exchange rate is as follows:
\[
\Delta S_t = S_{t+\Delta t} - S(l) = \begin{cases} S(l + \Delta h) - S(l) & \text{w/p } p \\ 0 & \text{w/p } 1-p \end{cases}
\]
So, the expected value is
\[
\begin{aligned}
E\left[ S_{t+\Delta t} - S(l) \right] &= \left[ S(l + \Delta h) - S(l) \right] \cdot p \\
&= \left[ S(l) + \Delta h\, S'(l) + \frac{1}{2}\Delta h^2 S''(l) - S(l) \right] \cdot p \\
&= \frac{1}{2}\sigma\sqrt{\Delta t}\, S'(l) + \left( \frac{1}{2}\mu S'(l) + \frac{1}{4}\sigma^2 S''(l) \right) \Delta t .
\end{aligned}
\]
Substituting in the law of motion of the exchange rate we have
\[
\alpha\left[ \frac{1}{2}\sigma\sqrt{\Delta t}\, S'(l) + \left( \frac{1}{2}\mu S'(l) + \frac{1}{4}\sigma^2 S''(l) \right) \Delta t \right] = \Delta t\left[ S(l) - l \right].
\]

Notice that when ∆t goes to zero, √∆t >> ∆t. Therefore, simplifying the previous equation we get
\[
\frac{1}{2}\alpha\sigma\sqrt{\Delta t}\, S'(l) = 0 \quad \Longrightarrow \quad S'(l) = 0 .
\]
Therefore, the function is flat at l! The exact same thing occurs at the upper bound.
There are four equations that describe the function and the impact of the intervention on the function.
These are the boundary conditions:

\begin{align}
S(u) &= U \tag{12a}\\
S'(u) &= 0 \tag{12b}\\
S(l) &= L \tag{12c}\\
S'(l) &= 0 \tag{12d}
\end{align}

The values of A and B as well as l and u satisfy the four boundary conditions (equations 12).
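With the general solution and the four conditions in hand, the coefficients and the intervention points can be found numerically. Here is a sketch (my own; α, µ, σ and the band [L, U] are illustrative assumptions) using scipy's fsolve.

```python
import numpy as np
from scipy.optimize import fsolve

# Solve for A, B, l, u in S(x) = A e^{lam1 x} + B e^{lam2 x} + x + alpha*mu
# using the four boundary conditions (12a)-(12d). Parameter values and the
# exchange rate band are illustrative assumptions, not taken from the notes.
alpha, mu, sigma = 2.0, 0.0, 0.1
L_band, U_band = -0.05, 0.05

# Roots of 1 = alpha*(mu*lam + 0.5*sigma^2*lam^2), by the quadratic formula.
disc = np.sqrt((alpha * mu)**2 + 2 * alpha * sigma**2)
lam1 = (-alpha * mu + disc) / (alpha * sigma**2)
lam2 = (-alpha * mu - disc) / (alpha * sigma**2)

S  = lambda x, A, B: A * np.exp(lam1 * x) + B * np.exp(lam2 * x) + x + alpha * mu
Sp = lambda x, A, B: A * lam1 * np.exp(lam1 * x) + B * lam2 * np.exp(lam2 * x) + 1.0

def equations(z):
    A, B, l, u = z
    return [S(u, A, B) - U_band,   # (12a)
            Sp(u, A, B),           # (12b)
            S(l, A, B) - L_band,   # (12c)
            Sp(l, A, B)]           # (12d)

A, B, l, u = fsolve(equations, x0=[-0.02, 0.02, -0.1, 0.1])
print("A, B =", A, B, "  band for the fundamentals: [", l, ",", u, "]")
```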

2.1.3 Comments

Some comments on the solution and its empirical relevance. First, the implied probability distribution in a target zone is such that most of the time the exchange rate should be close to the boundaries. In other words, if we depict the range of the fundamentals on the x-axis – fluctuating from l to u – the distribution should look like a U. However, in reality, the distribution is closer to an inverted U: the empirical distribution implies that most of the time the exchange rate is near the center of the band rather than at the extremes. The main reason this occurs is that the central bank intervenes not only at the edges of the band but also inside it, whereas in the model we have used the intervention takes place only at the extremes.

Second, the interest rate differential should be declining. In fact, close to the upper band the exchange rate is more likely to appreciate than depreciate, so lower interest rates should be expected. But this seems counterfactual: we observe that the longer the exchange rate stays close to the band, the higher the interest rate. The reason is that in our model there is no possibility of abandonment of the exchange rate regime, whereas in reality that abandonment risk is priced as a premium.
Extensions: Bertola and Caballero (1992) allow for a probability of realignment. Garber and Svensson (1994) look at cases in which there are large interventions, not necessarily at the band.

2.2 Cochrane-Longstaff-Santa Clara

The second application we study is related to the asset pricing section we covered at the beginning of the course. Remember the formula for the stock price in the two-country, one-good case in discrete time.
In continuous time, the formula is very similar. The next subsection follows closely the paper by Cochrane, Longstaff, and Santa-Clara (2005). The solution for the stock price in discrete or continuous time is exactly the same. In continuous time we usually work with a finite horizon – for simplicity of the regularity conditions – but that is the only difference. The stock price at home is given by
\[
S_{H,t} = y(t) \int_t^T e^{-\rho(s-t)}\, E_t\!\left[ \frac{y_H(s)}{y(s)} \right] ds
\]

Notice that the important problem to be solved is to describe the evolution of y_H(s)/y(s), which is the share of home output in world output. Computing the expectation of this object is the key step.
Assume a very general specification of the output processes:
\[
\frac{dy_H(t)}{y_H(t)} = \mu_H(t)\, dt + \sigma_H(t)\, dz_H(t), \qquad
\frac{dy_F(t)}{y_F(t)} = \mu_F(t)\, dt + \sigma_F(t)\, dz_F(t) \tag{13}
\]
where we assume that dz_H(t) and dz_F(t) are standard Wiener increments with mean zero and unit variance (variance dt). Additionally, assume that the two Brownian motions are uncorrelated.

2.2.1 Evolution of the Share

The first step is to find the process of the share. Assume we have the function
\[
\theta(x, y) = \frac{x}{x+y} ,
\]
then the first derivatives are
\[
\theta_x = \frac{y}{(x+y)^2}, \qquad \theta_y = -\frac{x}{(x+y)^2},
\]
while the second order derivatives are
\[
\theta_{xx} = -\frac{2y}{(x+y)^3}, \qquad \theta_{yy} = \frac{2x}{(x+y)^3}, \qquad
\theta_{xy} = \frac{1}{(x+y)^2} - \frac{2y}{(x+y)^3} .
\]

We can now substitute and find the stochastic properties of the share:
\[
d\theta = \theta_x\, dx + \theta_y\, dy + \frac{1}{2}\left[ \theta_{xx}\, [dx]^2 + \theta_{yy}\, [dy]^2 + 2\theta_{xy}\, [dx \cdot dy] \right].
\]
Denoting the share by θ and substituting, we get
\[
d\theta = \frac{y_F}{(y_H + y_F)^2}\, dy_H - \frac{y_H}{(y_H + y_F)^2}\, dy_F
+ \frac{1}{2}\left[ -\frac{2 y_F}{(y_H + y_F)^3}\, [dy_H]^2 + \frac{2 y_H}{(y_H + y_F)^3}\, [dy_F]^2 \right],
\]
where we have already used the fact that the Brownian motions are independent, so that [dy_H · dy_F] = 0. We know that θ = y_H/(y_H + y_F) and 1 − θ = y_F/(y_H + y_F). Therefore the equation describing the evolution of the share can be written as
\[
d\theta = \theta(1-\theta)\, \frac{dy_H}{y_H} - \theta(1-\theta)\, \frac{dy_F}{y_F}
- \theta^2(1-\theta) \left[ \frac{dy_H}{y_H} \right]^2 + \theta(1-\theta)^2 \left[ \frac{dy_F}{y_F} \right]^2 .
\]

I have eliminated the (t) arguments from the notation for simplicity. This can be further reduced to
\[
d\theta = \theta(1-\theta)\left( \frac{dy_H}{y_H} - \frac{dy_F}{y_F} \right)
+ \theta(1-\theta)\left[ (1-\theta)\left( \frac{dy_F}{y_F} \right)^2 - \theta\left( \frac{dy_H}{y_H} \right)^2 \right]
\]

This is a difficult process to describe for the general Brownian motions of equation (13). In order to simplify even further, assume that µ_H(t) = µ_F(t) = µ, and that σ_H(t) = σ_F(t) = σ. The evolution of θ is then
\[
\begin{aligned}
d\theta &= \theta(1-\theta)\left( \mu\, dt + \sigma\, dz_H \right) - \theta(1-\theta)\left( \mu\, dt + \sigma\, dz_F \right)
- \theta^2(1-\theta)\, \sigma^2 dt + \theta(1-\theta)^2 \sigma^2 dt \\
&= \theta(1-\theta)\, \sigma\left( dz_H - dz_F \right) + \theta(1-\theta)\left[ (1-\theta) - \theta \right] \sigma^2 dt \\
&= \theta(1-\theta)\, \sigma\left( dz_H - dz_F \right) + \theta(1-\theta)(1-2\theta)\, \sigma^2 dt .
\end{aligned}
\]
Notice that there is a drift,
\[
E\left[ d\theta_t \right] = \theta(1-\theta)(1-2\theta)\, \sigma^2\, dt ,
\]
which is zero at θ = 0, θ = 1, and θ = 1/2. Also, it is easy to confirm that the drift is positive when θ ∈ (0, 1/2) and negative when θ ∈ (1/2, 1). In other words, the share tends to move toward the middle, and there are two absorbing states: 0 and 1. The variance of this process is given by
\[
\left[ d\theta_t \right]^2 = \theta^2 (1-\theta)^2 \sigma^2 \cdot 2\, dt .
\]
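A quick simulation sketch (mine; the parameters are illustrative) of this share process shows the drift toward θ = 1/2 at work, together with the two absorbing states.

```python
import numpy as np

# Euler simulation of
#   d(theta) = theta(1-theta) sigma (dz_H - dz_F)
#            + theta(1-theta)(1-2 theta) sigma^2 dt.
# Parameter values are illustrative.
sigma, dt, T, n_paths = 0.2, 1e-2, 5.0, 5000
n = int(T / dt)

rng = np.random.default_rng(6)
theta = np.full(n_paths, 0.25)          # start all paths below 1/2
for _ in range(n):
    dz = np.sqrt(dt) * rng.standard_normal((2, n_paths))   # dz_H, dz_F
    drift = theta * (1 - theta) * (1 - 2 * theta) * sigma**2 * dt
    diffusion = theta * (1 - theta) * sigma * (dz[0] - dz[1])
    theta = np.clip(theta + drift + diffusion, 0.0, 1.0)    # 0 and 1 are absorbing

print("initial 0.25 -> mean after T:", theta.mean())  # drifts slightly toward 1/2
```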

2.2.2 Solving for Stock Prices

Importantly, the presence of a non-zero drift implies that the share is not a martingale; however, its law of motion depends only on the value of θ today, which means that it is a Markov process.
The price of the stock is given by
\[
\begin{aligned}
S_H(t) &= y(t) \int_t^T e^{-\rho(s-t)}\, E_t\!\left[ \frac{y_H(s)}{y(s)} \right] ds \\
&= y(t) \int_t^T e^{-\rho(s-t)}\, E_t\, \theta_s\, ds .
\end{aligned}
\]
This can be re-written as
\[
\frac{S_H(t)}{y(t)} = E_t\, F(\theta_t), \qquad
F(\theta_t) = \int_t^T e^{-\rho(s-t)}\, \theta_s\, ds .
\]

As before, we can use Itô's lemma to compute this integral (and its expectation). F(·) is a function of θ, and therefore for a given process of the share we can compute the evolution of the function by Itô's lemma:
\[
dF(\theta_t) = F'(\theta_t)\, d\theta_t + \frac{1}{2} F''(\theta_t)\, [d\theta_t]^2 .
\]
Hence,
\[
E_t\, dF(\theta_t) = F'(\theta_t)\, E_t[d\theta_t] + \frac{1}{2} F''(\theta_t)\, [d\theta_t]^2 .
\]
From the previous derivation of the process of θ,
\[
E_t[d\theta_t] = \theta(1-\theta)(1-2\theta)\, \sigma^2\, dt, \qquad
[d\theta_t]^2 = \theta^2(1-\theta)^2 \sigma^2 \cdot 2\, dt .
\]
Substituting,
\[
E_t\, dF(\theta_t) = \left[ F'(\theta_t)\, \theta(1-\theta)(1-2\theta) + F''(\theta_t)\, \theta^2(1-\theta)^2 \right] \sigma^2\, dt \tag{14}
\]

To finalize the differential equation describing the function we have to concentrate on the term on the left. We have defined the function F as
\[
F(\theta_t) = \int_t^T e^{-\rho(s-t)}\, \theta_s\, ds ,
\]
which means that F is a stochastic integral (mainly because it is an integral of a stochastic variable). This function, interestingly, depends on time through two channels: the integrand and the lower limit of integration. So, we can compute the change in the function with respect to t:
\[
dF(\theta_t) = \rho\left[ \int_t^T e^{-\rho(s-t)}\, \theta_s\, ds \right] dt - e^{-\rho(t-t)}\, \theta_t\, dt
= \rho F(\theta_t)\, dt - \theta_t\, dt ,
\]
where the first term of the derivative comes from the integrand, and the second from the limit of integration. Now, we do not need the whole process of F(·), only its expectation:
\[
E_t\, dF(\theta_t) = \rho F(\theta_t)\, dt - \theta_t\, dt .
\]

The term on the left is what we computed above in equation (14). Substituting implies the following differential equation:
\[
F'(\theta_t)\left[ \theta_t(1-\theta_t)(1-2\theta_t) \right]\sigma^2
+ F''(\theta_t)\left[ \theta_t^2(1-\theta_t)^2 \right]\sigma^2 = \rho F(\theta_t) - \theta_t .
\]
So, we have to solve the following differential equation to find the function F and the value of the stock:
\[
\theta^2(1-\theta)^2\sigma^2 \cdot F'' + \theta(1-\theta)(1-2\theta)\sigma^2 \cdot F' - \rho F = -\theta .
\]
This is usually solved with numerical methods. There is one exact solution to this differential equation that uses hyperbolic functions – it can be obtained with Laplace or Fourier transforms – but this exact solution only exists for this very simple example where the drifts of the processes and the variances are equal and constant. In the general case, there is no simple solution and the easiest way to solve this differential equation is to appeal to one of the several numerical methods available in the literature. My preferred method is called Multigrid. See Briggs, Henson, and McCormick (2000) for further references.
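For completeness, here is a minimal finite-difference sketch (my own; not the multigrid method recommended above) for the infinite-horizon version of this ODE on a grid over θ, using F(0) = 0 and F(1) = 1/ρ as boundary values, which follow from the fact that 0 and 1 are absorbing states.

```python
import numpy as np

# Finite-difference solution of
#   theta^2 (1-theta)^2 sigma^2 F'' + theta(1-theta)(1-2theta) sigma^2 F' - rho F = -theta
# with boundary values F(0)=0 and F(1)=1/rho (0 and 1 are absorbing, so these
# are natural boundary values as T -> infinity). sigma, rho are illustrative.
sigma, rho = 0.2, 0.05
N = 400
theta = np.linspace(0.0, 1.0, N + 1)
h = theta[1] - theta[0]

a = theta**2 * (1 - theta)**2 * sigma**2               # coefficient on F''
b = theta * (1 - theta) * (1 - 2 * theta) * sigma**2   # coefficient on F'

# Build the linear system  A F = rhs  for the interior nodes 1..N-1.
A = np.zeros((N + 1, N + 1))
rhs = -theta.copy()
for i in range(1, N):
    A[i, i - 1] = a[i] / h**2 - b[i] / (2 * h)
    A[i, i]     = -2 * a[i] / h**2 - rho
    A[i, i + 1] = a[i] / h**2 + b[i] / (2 * h)
A[0, 0], rhs[0] = 1.0, 0.0          # F(0) = 0
A[N, N], rhs[N] = 1.0, 1.0 / rho    # F(1) = 1/rho

F = np.linalg.solve(A, rhs)
print("F(1/2) =", F[N // 2], " (home price-output ratio at theta = 1/2)")
```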
Notice that once F is known as a function of the share, then the stock price of home is known using

\[
S_H(t) = y(t)\, F(\theta_t),
\]
while the foreign stock is
\[
S_F(t) = y(t)\, F(1 - \theta_t) .
\]
That solves the problem of all the stock prices.

References
Bertola, G., and R. J. Caballero (1992): “Target Zones and Realignments,” American Economic
Review, 82, 520–536.
Briggs, W. L., V. E. Henson, and S. F. McCormick (2000): A Multigrid Tutorial, 2nd edn. SIAM: Society for Industrial and Applied Mathematics.
Cochrane, J. H., F. A. Longstaff, and P. Santa-Clara (2005): “Two Trees,” working paper, Uni-
versity of Chicago.
Dixit, A. (1993): The Art of Smooth Pasting. Harwood Academic Publishers.

Garber, P. M., and L. E. Svensson (1994): “The Operation and Collapse of Fixed Exchange Rate
Regimes,” in Handbook of International Economics, ed. by G. Grossman, and K. Rogoff, vol. 3, chap. 36,
pp. 1865–1911. Elsevier, 1 edn.
Harrison, M. (1990): Brownian motion and Stochastic Flow Systems. Krieger Publishing Company.

Karatzas, I., and S. E. Shreve (1988): Brownian Motion and Stochastic Calculus. Springer-Verlag, New
York.
Øksendal, B. (2003): Stochastic Differential Equations. 6th edition, Springer-Verlag, Berlin.
