Homework_Week3_solutions

The document contains exercises on Markov chains, including the computation of transition matrices, probabilities of specific trajectories, and analysis of various stochastic processes. It covers scenarios involving ants moving between points, a cat hunting a rat, and mice navigating a maze, providing solutions and methodologies for each exercise. Key concepts such as irreducibility, hitting times, and first-step analysis are discussed throughout the exercises.


Exercise 1: Consider four points, 1, 2, 3 and 4, on a surface. An ant moves at random (with
equal probabilities) between points 1, 2, 3 and 4. The starting point is also chosen uniformly.
Compute the transition matrix. Let Xn, n = 0, 1, . . ., be the position of the ant at time n.
What is the probability of the trajectory X0 = 1, X1 = 2, X2 = 3, X3 = 4, X4 = 1?
Solution:
The starting point is chosen uniformly in {1, 2, 3, 4}. This implies ν = (1/4, 1/4, 1/4, 1/4)
is the initial distribution of the chain. The ant chooses the next point at random, thus the
transition matrix is

\[ P = \begin{pmatrix} 1/4 & 1/4 & 1/4 & 1/4 \\ 1/4 & 1/4 & 1/4 & 1/4 \\ 1/4 & 1/4 & 1/4 & 1/4 \\ 1/4 & 1/4 & 1/4 & 1/4 \end{pmatrix}. \]


With this information we compute

\[ P(X_0 = 1, X_1 = 2, X_2 = 3, X_3 = 4, X_4 = 1) = \nu(1)\, p_{12}\, p_{23}\, p_{34}\, p_{41} = \left(\frac{1}{4}\right)^{5}. \]
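The trajectory probability can be checked numerically; a minimal Python sketch (the variable names are illustrative, not from the exercise):

```python
# Ant chain: every entry of the initial distribution nu and of the
# transition matrix P equals 1/4.
nu = [0.25, 0.25, 0.25, 0.25]
P = [[0.25] * 4 for _ in range(4)]

# P(X0=1, X1=2, X2=3, X3=4, X4=1) = nu(1) * p12 * p23 * p34 * p41
# (states 1..4 map to indices 0..3)
prob = nu[0] * P[0][1] * P[1][2] * P[2][3] * P[3][0]
print(prob)  # (1/4)^5 = 0.0009765625
```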

Exercise 2: Consider an HMC (Xn)n≥0 with values in I = {0, 1, 2} and transition matrix

\[ P = \begin{pmatrix} 0.2 & 0.5 & 0.3 \\ 0.1 & 0.1 & 0.8 \\ 0.5 & 0.2 & 0.3 \end{pmatrix}. \tag{1} \]

Draw the transition graph and compute P(X3 = 1|X0 = 1) and P(X7 = 2|X4 = 0).
Solution: Since P does not have null entries, the transition graph is the complete graph.
That is, any pair of states is connected by an edge of the graph. In other words, it is
possible to reach any state, from any starting point, in one step.
Moreover, we have

\[ P(X_3 = 1 \mid X_0 = 1) = \sum_{i,j=0}^{2} p_{1i}\, p_{ij}\, p_{j1} = 0.307; \]

\[ P(X_7 = 2 \mid X_4 = 0) = P(X_3 = 2 \mid X_0 = 0) = \sum_{i,j=0}^{2} p_{0i}\, p_{ij}\, p_{j2} = 0.405. \]
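The two values can be checked by computing the three-step matrix P³ directly; a short Python sketch (numpy assumed):

```python
import numpy as np

# Transition matrix (1) from Exercise 2
P = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.1, 0.8],
              [0.5, 0.2, 0.3]])

P3 = np.linalg.matrix_power(P, 3)  # three-step transition probabilities
print(P3[1, 1])  # P(X3 = 1 | X0 = 1) ~ 0.307
print(P3[0, 2])  # P(X3 = 2 | X0 = 0) ~ 0.405
```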

Exercise 3: Consider N numbered balls. Some of them are contained in urn A and the
others in urn B. Let Xn be the number of balls in A at time n. The number of balls
in A at time n + 1 (i.e. Xn+1) is decided as follows. Choose at random a ball out of the
N balls. Put the chosen ball in A with probability p and in B with probability 1 − p. Prove
that (Xn)n≥0 is an HMC and compute the transition matrix of the process.
Solution: The dynamics of Xn may be described as follows:

\[ X_{n+1} = X_n + Z_{n+1}, \quad \text{for } n \geq 0, \]

where Z_{n+1} ∈ {−1, 0, 1} with

\[ P(Z_{n+1} = -1 \mid X_n = i, X_{n-1} = \ldots) = P(Z_{n+1} = -1 \mid X_n = i) = \frac{i}{N}(1-p); \]
\[ P(Z_{n+1} = 0 \mid X_n = i, X_{n-1} = \ldots) = P(Z_{n+1} = 0 \mid X_n = i) = \frac{i}{N}\,p + \frac{N-i}{N}(1-p); \]
\[ P(Z_{n+1} = +1 \mid X_n = i, X_{n-1} = \ldots) = P(Z_{n+1} = +1 \mid X_n = i) = \frac{N-i}{N}\,p. \]
It follows from Bremaud, Theorem 2.2, that the previous relations define an HMC.
To compute the transition matrix P = (p_{ij})_{i,j∈{0,...,N}}, notice that p_{ij} = 0 if
j ∉ {i − 1, i, i + 1}. When j = i, i − 1 or i + 1 we have

\[ p_{ii} = P(X_{n+1} = i \mid X_n = i) = P(X_n + Z_{n+1} = i \mid X_n = i) = P(Z_{n+1} = 0 \mid X_n = i) = \frac{i}{N}\,p + \frac{N-i}{N}(1-p); \]

\[ p_{i,i+1} = P(Z_{n+1} = +1 \mid X_n = i) = \frac{N-i}{N}\,p; \]

\[ p_{i,i-1} = P(Z_{n+1} = -1 \mid X_n = i) = \frac{i}{N}(1-p). \]
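These formulas can be checked by building the matrix for concrete values of N and p; a minimal Python sketch (N = 5 and p = 0.3 are illustrative choices, not taken from the exercise):

```python
# Urn chain of Exercise 3: tridiagonal transition matrix on {0, ..., N}.
N, p = 5, 0.3

P = [[0.0] * (N + 1) for _ in range(N + 1)]
for i in range(N + 1):
    if i > 0:
        P[i][i - 1] = (i / N) * (1 - p)              # chosen ball sent to B
    P[i][i] = (i / N) * p + ((N - i) / N) * (1 - p)  # count unchanged
    if i < N:
        P[i][i + 1] = ((N - i) / N) * p              # chosen ball sent to A

# each row of a transition matrix must sum to 1
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)
```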

Exercise 4: Let (Zn )n≥1 be a sequence of i.i.d. geometric r.v. with parameter p ∈ (0, 1):
that is, for k ≥ 0 we have P(Zn = k) = (1 − p)k p. Let us consider the stochastic pro-
cess (Xn )n≥0 , where X0 ∈ N is a r.v. independent of the sequence (Zn )n≥1 and define
Xn = max{X0 , Z1 , . . . , Zn } for n ≥ 1. Show that (Xn )n≥0 is a Markov chain and compute its
transition matrix.
Solution: To prove that (Xn)n≥0 is an HMC it suffices to show that for n ≥ 1

Xn = max{X0 , Z1 , . . . , Zn } = max{max{X0 , Z1 , . . . , Zn−1 }, Zn } = max{Xn−1 , Zn }

and then use Bremaud, Theorem 2.1.


To compute the transition matrix P = (p_{ij})_{i,j∈N} notice that p_{ii} and p_{i,i+k} with k ≥ 1 are the
only non-null entries. In particular, we have

\[ p_{ii} = P(Z_{n+1} \leq i \mid X_n = i) = P(Z_{n+1} \leq i) = \sum_{k=0}^{i} (1-p)^k p; \]

\[ p_{i,i+k} = P(Z_{n+1} = i + k \mid X_n = i) = P(Z_{n+1} = i + k) = (1-p)^{i+k} p. \tag{2} \]
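As a consistency check on (2), the geometric sum gives p_ii = 1 − (1 − p)^(i+1), and the off-diagonal mass Σ_{k≥1} (1 − p)^(i+k) p equals (1 − p)^(i+1), so each row sums to 1. A minimal Python sketch (p = 0.4 and i = 3 are illustrative choices):

```python
# Row-sum check for the transition matrix (2) of Exercise 4.
p, i = 0.4, 3

# diagonal entry: P(Z <= i) as a finite geometric sum
p_ii = sum((1 - p) ** k * p for k in range(i + 1))
assert abs(p_ii - (1 - (1 - p) ** (i + 1))) < 1e-12  # closed form

# total off-diagonal mass: sum over k >= 1 of (1-p)^(i+k) * p
off_diag = (1 - p) ** (i + 1)
assert abs(p_ii + off_diag - 1.0) < 1e-12  # row sums to 1
```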

Exercise 5: A mouse moves in a maze. The rooms of the maze are numbered from 1 to 5.
In room 5 there is cheese, whereas in room 3 there is a cat. The cat is lazy and remains in
room 3 all the time. However, if the mouse enters the room inhabited by the cat, the cat
will eat the mouse.
The position of the mouse at time n ≥ 0 is a Markov chain with transition matrix:

\[ P = \begin{pmatrix} 0 & 1/2 & 0 & 1/2 & 0 \\ 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 1/3 & 0 & 1/3 & 0 & 1/3 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}. \tag{3} \]
1. Draw the transition graph associated to P .

2. Is P irreducible?

3. Compute the probability that the mouse, starting from room 1, gets to the cheese without
being eaten by the cat.

Solution: [Transition diagram: 1 → 2 and 1 → 4 with probability 1/2 each; 2 → 1 and 2 → 3 with probability 1/2 each; 4 → 1, 4 → 3 and 4 → 5 with probability 1/3 each; self-loops at the absorbing states 3 and 5.]

The chain is NOT irreducible since we have 3 communicating classes: {1, 2, 4}, {3} and {5}.
Define u_i = P_i(T_5 < T_3), where T_i = min{n ≥ 0 : X_n = i} is the first hitting time of state
i ∈ {1, 2, 3, 4, 5}. We have that


\[ \begin{cases} 2u_1 = u_2 + u_4 \\ 2u_2 = u_1 + u_3 \\ u_3 = 0 \\ 3u_4 = u_1 + u_3 + u_5 \\ u_5 = 1 \end{cases} \]

The solution is u = (2/7, 1/7, 0, 3/7, 1). Therefore P_1(T_5 < T_3) = u_1 = 2/7.
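The linear system can also be solved numerically. A Python sketch (numpy assumed), after substituting u3 = 0 and u5 = 1 so that only u1, u2, u4 remain unknown:

```python
import numpy as np

# 2u1 = u2 + u4,  2u2 = u1,  3u4 = u1 + 1  (unknowns u1, u2, u4)
A = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0,  0.0],
              [-1.0,  0.0,  3.0]])
b = np.array([0.0, 0.0, 1.0])

u1, u2, u4 = np.linalg.solve(A, b)
print(u1, u2, u4)  # 2/7, 1/7, 3/7
```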
Exercise 6: A cat is hunting a rat. The rat and the cat move between two rooms using
different paths. Their motions are independent and described by the transition matrices

\[ P_{\text{cat}} = \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}, \qquad P_{\text{rat}} = \begin{pmatrix} 0.4 & 0.6 \\ 0.6 & 0.4 \end{pmatrix}. \tag{4} \]

If the cat and the rat are ever in the same room, the cat will eat the rat and the hunt
comes to an end.

1. Describe the time evolution of the hunt as a Markov chain, with one of the states
representing the end of the hunt (i.e. the cat and the rat are in the same room and
the cat eats the rat).

2. Assume the cat starts from room 2 and the rat from room 1. Compute the average
duration of the hunt.
Solution:

1. Let (Xn )n≥0 be the Markov chain describing the dynamics of the hunting. We set
Xn ∈ {0, 1, 2}, n ≥ 0, where

• 0=“the cat and the rat are in the same room”;


• 1=“the cat is in room 2 and the rat in room 1”;
• 2=“the cat is in room 1 and the rat in room 2”.

Let us compute the transition matrix P = (p_{ij})_{i,j∈{0,1,2}} associated with the chain.

Notice that, since 0 is a closed state, it holds p00 = 1 and p01 = p02 = 0.

Then, let A be the event “the rat remains in room 1 whereas the cat moves from room
2 to room 1” and B the event “the cat remains in room 2 whereas the rat moves from
room 1 to room 2”. Since the cat is moving independently of the rat, we have

p10 = P(A) + P(B) = 0.4 × 0.3 + 0.7 × 0.6.

In the same way we can compute p11 = 0.4 × 0.7 and p12 = 0.3 × 0.6, which are, respectively,
the probability that the rat remains in room 1 and the cat in room 2, and the probability
that the cat moves from room 2 to room 1 while, at the same time, the rat moves from
room 1 to room 2.

It follows from the same computations that p20 = 0.4 × 0.3 + 0.7 × 0.6, p21 = 0.3 × 0.6
and p22 = 0.4 × 0.7. In conclusion, we have

\[ P = \begin{pmatrix} 1 & 0 & 0 \\ 0.4 \times 0.3 + 0.7 \times 0.6 & 0.4 \times 0.7 & 0.3 \times 0.6 \\ 0.4 \times 0.3 + 0.7 \times 0.6 & 0.3 \times 0.6 & 0.4 \times 0.7 \end{pmatrix}. \tag{5} \]

2. Put T = inf{n ≥ 0 : Xn = 0} and E(i) = E(T | X0 = i) for i = 0, 1, 2. Using first-step
analysis, and the boundary condition E(0) = 0, we obtain the system

\[ \begin{cases} E(1) = p_{11} E(1) + p_{12} E(2) + 1 \\ E(2) = p_{21} E(1) + p_{22} E(2) + 1 \end{cases} \tag{6} \]

which has the solution E(1) = E(2) ≃ 1.85.
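The construction of P and the first-step system can be reproduced numerically; a Python sketch (numpy assumed; the 0-indexing of the rooms is a choice made here):

```python
import numpy as np

P_cat = np.array([[0.7, 0.3], [0.3, 0.7]])
P_rat = np.array([[0.4, 0.6], [0.6, 0.4]])

# state 1: cat in room 2, rat in room 1 (rooms 0-indexed: cat at 1, rat at 0)
p11 = P_cat[1, 1] * P_rat[0, 0]   # both animals stay put
p12 = P_cat[1, 0] * P_rat[0, 1]   # cat and rat swap rooms
p21, p22 = p12, p11               # state 2 is symmetric

# E(1) = p11 E(1) + p12 E(2) + 1 ;  E(2) = p21 E(1) + p22 E(2) + 1
A = np.array([[1 - p11, -p12],
              [-p21, 1 - p22]])
E1, E2 = np.linalg.solve(A, np.ones(2))
print(E1, E2)  # both equal 1/0.54 ~ 1.85
```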

Exercise 7: Let (Xn)n≥0 be an HMC taking values in I = {1, 2, 3, 4}, with transition matrix

\[ P = \begin{pmatrix} 0.2 & 0.3 & 0.5 & 0 \\ 0 & 0.2 & 0.3 & 0.5 \\ 0.5 & 0 & 0.2 & 0.3 \\ 0.3 & 0.5 & 0 & 0.2 \end{pmatrix}. \tag{7} \]

Assume X0 = 1. Compute the probability that the chain visits state 3 before state 4.
Solution: Put H = inf{n ≥ 0 : Xn ∈ {3, 4}}. We have to compute u(1) := P(X_H = 3 | X_0 = 1).
In general, we set u(i) := P(X_H = 3 | X_0 = i), for i = 1, 2, 3, 4. Notice that u(3) = 1 and
u(4) = 0. Then, conditioning on the first step, we obtain the following system for u(1) and u(2):

\[ u(1) = p_{11}\, u(1) + p_{12}\, u(2) + p_{13}; \]
\[ u(2) = p_{22}\, u(2) + p_{23}; \]

with solution u(1) ≃ 0.766 and u(2) = 0.375.
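The same first-step system can be solved numerically; a Python sketch (numpy assumed):

```python
import numpy as np

# Transition matrix (7) from Exercise 7
P = np.array([[0.2, 0.3, 0.5, 0.0],
              [0.0, 0.2, 0.3, 0.5],
              [0.5, 0.0, 0.2, 0.3],
              [0.3, 0.5, 0.0, 0.2]])

# u(1) = p11 u(1) + p12 u(2) + p13 ;  u(2) = p22 u(2) + p23
# (states 1..4 map to indices 0..3)
A = np.array([[1 - P[0, 0], -P[0, 1]],
              [0.0,          1 - P[1, 1]]])
b = np.array([P[0, 2], P[1, 2]])

u1, u2 = np.linalg.solve(A, b)
print(u1, u2)  # 0.765625 and 0.375
```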
Exercise 8: Show that the vector of mean hitting times of the set A ⊂ I, k^A = (k_i^A : i ∈ I),
is the minimal non-negative solution to the system of linear equations

\[ \begin{cases} k_i^A = 0 & \text{for } i \in A \\ k_i^A = 1 + \sum_{j \notin A} p_{ij} k_j^A & \text{for } i \notin A. \end{cases} \]

Solution: Define the hitting time of A ⊂ I as H^A = min{n ≥ 0 : Xn ∈ A}. We then have
k_i^A = E_i[H^A]. It is clear that if i ∈ A then H^A = 0 a.s., and therefore k_i^A = 0.
By first-step analysis, for i ∉ A we have

\[ k_i^A = E_i[H^A] = E_i[E_i[H^A \mid X_1]] = E_i[E_{X_1}[1 + \tilde{H}^A]], \]

where \tilde{H}^A = min{n ≥ 0 : X_{n+1} ∈ A} has the same distribution as H^A. It follows that

\[ k_i^A = 1 + E_i[E_{X_1}[\tilde{H}^A]] = 1 + E_i[k^A_{X_1}] = 1 + \sum_{j \in I} k_j^A\, P_i(X_1 = j) = 1 + \sum_{j \notin A} k_j^A\, p_{ij}. \]

We now check that k^A is the minimal solution. Let us consider another solution {x_i}_{i∈I} with
x ≥ 0. If i ∈ A then x_i = 0 = k_i^A. If i ∉ A we have

\[ x_i = 1 + \sum_{j_1 \notin A} p_{ij_1} x_{j_1} \]

and then, noticing that P_i(H^A ≥ 1) = 1, we recursively get

\[ x_i = P_i(H^A \geq 1) + \sum_{j_1 \notin A} p_{ij_1} + \sum_{j_1, j_2 \notin A} p_{ij_1} p_{j_1 j_2} x_{j_2} \]
\[ = P_i(H^A \geq 1) + P_i(H^A \geq 2) + \sum_{j_1, j_2 \notin A} p_{ij_1} p_{j_1 j_2} x_{j_2} \]
\[ = P_i(H^A \geq 1) + P_i(H^A \geq 2) + P_i(H^A \geq 3) + \sum_{j_1, j_2, j_3 \notin A} p_{ij_1} p_{j_1 j_2} p_{j_2 j_3} x_{j_3} \]
\[ = \sum_{k=1}^{n} P_i(H^A \geq k) + \sum_{j_1, \ldots, j_n \notin A} p_{ij_1} \cdots p_{j_{n-1} j_n} x_{j_n}. \]

It follows that, for any n ≥ 1,

\[ x_i \geq \sum_{k=1}^{n} P_i(H^A \geq k). \]

By taking the limit for n → ∞, we finally get

\[ x_i \geq \sum_{k=1}^{\infty} P_i(H^A \geq k) = E_i[H^A] = k_i^A, \]

and the result holds true.
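In matrix form the system reads (I − Q) k^A = 1, where Q is P restricted to the states outside A, so on a finite chain k^A can be obtained with one linear solve. A Python sketch (numpy assumed), applied for illustration to the maze chain of Exercise 5 with A = {3, 5}:

```python
import numpy as np

# Maze chain from Exercise 5 (states 1..5 mapped to indices 0..4)
P = np.array([[0,   1/2, 0,   1/2, 0],
              [1/2, 0,   1/2, 0,   0],
              [0,   0,   1,   0,   0],
              [1/3, 0,   1/3, 0,   1/3],
              [0,   0,   0,   0,   1]])

A = {2, 4}                                   # states 3 and 5, 0-indexed
outside = [i for i in range(5) if i not in A]
Q = P[np.ix_(outside, outside)]              # P restricted outside A

# minimal solution: (I - Q) k = 1 on the states outside A, k = 0 on A
k = np.linalg.solve(np.eye(len(outside)) - Q, np.ones(len(outside)))
print(dict(zip([i + 1 for i in outside], k)))  # k1 = 24/7, k2 = 19/7, k4 = 15/7
```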
Exercise 9: Consider the gambler's ruin problem with p = q = 1/2. That is, I = N_0, p_{00} = 1
and p_{i,i+1} = p_{i,i-1} = 1/2 for i > 0. Compute k_i = E_i[H^0], where H^0, the ruin time, is
the first hitting time of state 0.

Solution: We know that the mean hitting times must solve the system of equations

\[ \begin{cases} k_0 = 0 \\ k_i = 1 + \frac{1}{2} k_{i-1} + \frac{1}{2} k_{i+1} & \text{for } i > 0. \end{cases} \]

Taking differences, with ∆hi = hi+1 − hi , in the second equation we have

∆ki = ∆ki−1 − 2 ,

with ∆k0 = k1 .
It follows that ∆k_i = k_1 − 2i. Since k_{i+1} = ∆k_i + k_i = k_1 + \sum_{n=1}^{i} ∆k_n, we get

\[ k_{i+1} = (i+1) k_1 - 2 \sum_{n=1}^{i} n = (i+1) k_1 - i(i+1) = (i+1)(k_1 - i). \]

Since we look for the minimal solution among the non-negative ones, we see that no finite
k_1 satisfies the non-negativity condition: k_{i+1} = (i + 1)(k_1 − i) becomes negative as soon
as i > k_1. Therefore, k_i = ∞ for all i > 0.
