Markov Analysis
Introduction
Examples of Markov Process
Transition Probability Matrix
States and States Probabilities
Predicting Short Term Future State Probabilities
Steady State Probabilities or Equilibrium Conditions
Absorbing States and Fundamental Matrix
What is a Markov Process
A Markov process is a probabilistic decision-making tool that
describes the behavior of a system over time.
Example of Markov Analysis: Machine Breakdown
A machine is always in one of two states: working or under
maintenance. If the machine is working at the beginning of one day,
then at the beginning of the next day there is a 90% chance that it
is still working. If the machine is under maintenance at the
beginning of one day, then at the beginning of the next day there is
an 85% chance that it will be in working condition. A manager is
interested in knowing the following:
1) What is the probability that the machine will work continuously
for seven days? This helps to schedule the manpower.
2) What is the steady-state (long-term) behavior of the machine?
This helps to determine the production capacity of this machine.

Transition probability matrix (rows: current day, columns: next day):

               Working   Maintenance
Working          0.90        0.10
Maintenance      0.85        0.15
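A minimal Python sketch of both questions (numpy, and the reading of
"works continuously for seven days" as six consecutive working-to-
working transitions starting from a working day 1, are assumptions,
not part of the original slides):

```python
import numpy as np

# Transition probability matrix: rows = current day, columns = next day
# State order: 0 = working, 1 = maintenance
P = np.array([[0.90, 0.10],
              [0.85, 0.15]])

# Q1: probability the machine works seven consecutive days, given it
# is working on day 1 (assumed: six working-to-working transitions)
p_seven_days = P[0, 0] ** 6
print(p_seven_days)  # 0.9^6 ~= 0.531

# Q2: steady-state behavior, approximated by raising P to a large power
steady = np.linalg.matrix_power(P, 50)[0]
print(steady)  # ~[0.895, 0.105]: the machine is available ~89.5% of days
```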
Example of Markov Analysis: Brand Switching
Suppose there are three brands of a soft drink competing in a
market. These brands are labeled 1, 2, and 3.
Example of Markov Analysis: Social Class Mobility
A problem of interest in the study of social structure is the
transition between the social statuses of successive generations in
a family. Sociologists often assume that the social class of a son
depends only on his parents' social class, not on his grandparents'.
Each family in the society occupies one of three social classes:
upper, middle, and lower. Classes are defined based on social
structure and occupation, and occupation evolves across
generations. A transition probability matrix between the social
classes is given below.

Transition probability matrix (rows: current generation, columns: following generation):

               Upper Class   Middle Class   Lower Class
Upper Class        0.45           0.48          0.07
Middle Class       0.05           0.70          0.25
Lower Class        0.01           0.50          0.49
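The slide's follow-up questions are not reproduced here, so as one
hedged illustration (the specific question is an assumption): the
matrix lets us compute, say, the class distribution of the grandson
of an upper-class family, i.e. two transitions, the first row of P
squared:

```python
import numpy as np

# Rows/columns ordered: upper, middle, lower
P = np.array([[0.45, 0.48, 0.07],
              [0.05, 0.70, 0.25],
              [0.01, 0.50, 0.49]])

# Grandson's class distribution for an upper-class grandparent:
# two generations = two transitions = first row of P^2
print(np.linalg.matrix_power(P, 2)[0])  # ~[0.227, 0.587, 0.186]
```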
Example of Markov Analysis: Social Class Mobility (Contd.)
Let us assume that the transition between the social classes of
successive generations is a Markov process and that the model is
appropriate. Sociologists are then able to answer questions such
as:
Application Areas of Markov Process
1) Brand switching: proportion of customers who will switch from
one brand to another over time.
2) TV show market share: proportion of viewers who switch from one
channel to another.
3) Car rental policy: proportion of renters who return a rental car
at various locations.
4) Social class mobility: transition of social class across
successive generations in a family.
5) Automobile insurance: insurance premium design based on the
probability of claim types over time for a policyholder.
6) Planning the movement of patients in a hospital.
7) Machine replacement policy: proportion of non-available machine
hours over time.
8) Developing an inspection strategy: proportion of nonconforming
items produced by a manufacturing process.
Revisiting Brand Switching Problem
Transition probability matrix (rows: current week, columns: next week):

            Brand 1   Brand 2   Brand 3
Brand 1       0.51      0.35      0.14
Brand 2       0.12      0.80      0.08
Brand 3       0.03      0.05      0.92

To set up notation, let us assume that a system has two states.
π1(1) denotes the probability of the system being in state 1 in
period 1, and π2(1) denotes the probability of the system being in
state 2 in period 1. Then the system state probabilities at period
1 are:
Π(1) = [π1(1)  π2(1)]
Markov Analysis: Assumptions
1) The number of states of the system is finite. This set contains
collectively exhaustive and mutually exclusive states of the
system.
2) The system is a closed system: no state is added to or deleted
from the system.
3) The transition probability matrix remains the same over time.
4) We consider only a first-order Markov process, in which the
future state of the system depends only on the current state and
the state transition matrix.
Markov Analysis: Short Term System Behavior
The probability that the system is in a particular state at period
n+1 is completely determined by the state of the system at period n
(and not by the state at period n-1). This is referred to as the
memory-less property.
Let us assume that the system has S states. Π(n) denotes the state
probabilities of the system in period n:
Π(n) = [π1(n)  π2(n)  π3(n)  . . .  πS(n)]
The state probabilities at the (n+1)th period are given by
Π(n+1) = Π(n)P, where P is the transition probability matrix:

Transition probability matrix (rows: current week, columns: next week):

            Brand 1   Brand 2   Brand 3
Brand 1       0.51      0.35      0.14
Brand 2       0.12      0.80      0.08
Brand 3       0.03      0.05      0.92
[Probability tree (not shown): week 0 → week 1 → week 2 transitions
for a customer who purchased brand 1 at week 0.]

Purchase probability of brand 1 in the 2nd week:
0.51 × 0.51 + 0.35 × 0.12 + 0.14 × 0.03
= 0.260 + 0.042 + 0.004 = 0.306
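A small Python check of this tree calculation (the starting vector
Π(0) = [1, 0, 0] encodes the slide's "customer bought brand 1 at
week 0" setup):

```python
import numpy as np

P = np.array([[0.51, 0.35, 0.14],
              [0.12, 0.80, 0.08],
              [0.03, 0.05, 0.92]])

pi0 = np.array([1.0, 0.0, 0.0])  # customer bought brand 1 at week 0

pi1 = pi0 @ P   # state probabilities at week 1
pi2 = pi1 @ P   # state probabilities at week 2
print(pi1)      # [0.51, 0.35, 0.14]
print(pi2)      # [0.3063, 0.4655, 0.2282] -> brand 1 ~ 0.306
```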
Predicting Short Term Behavior: Example (Contd.)
Q1) A customer purchased brand 1 at the initial shopping, i.e.,
week 0. What is the probability that the customer will
repeat-purchase brand 1 in the 2nd week?

    0.51  0.35  0.14
P = 0.12  0.80  0.08
    0.03  0.05  0.92

Purchase probabilities in the 1st week: Π(1) = [0.510  0.350  0.140]
Purchase probabilities in the 2nd week: Π(2) = [0.306  0.466  0.228]

Brand share after the 12th week: Π(12) = Π(0)P^12, where P is the
transition probability matrix given above.
Brand shares for brands 1, 2, and 3 are: 11.9%, 35%, and 53.1%.
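A sketch of the 12-week computation with numpy's matrix_power. Note
that Π(0) is not stated on the slide, so the uniform initial share
below is only an assumption; substitute the actual initial shares
if known:

```python
import numpy as np

P = np.array([[0.51, 0.35, 0.14],
              [0.12, 0.80, 0.08],
              [0.03, 0.05, 0.92]])

# Pi(0) is not given on the slide; a uniform initial share is one
# plausible assumption for illustration.
pi0 = np.array([1/3, 1/3, 1/3])

pi12 = pi0 @ np.linalg.matrix_power(P, 12)
print(pi12)  # compare with the slide's reported 11.9%, 35%, 53.1%
```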
Markov Analysis: Long Term System Behavior
The long-term behavior of a system is described by its steady-state
probabilities. The steady-state probabilities are the limiting
probabilities of Π(n) over an infinite number of trials or
transitions, i.e., as n → ∞:
Π = [π1  π2  . . .  πS]
where S is the number of states.

Transition probability matrix (rows: current week, columns: next week):

            Brand 1   Brand 2   Brand 3
Brand 1       0.51      0.35      0.14
Brand 2       0.12      0.80      0.08
Brand 3       0.03      0.05      0.92

Q3) What is the market share of a specific brand in the long run
(i.e., the average market share of the brand when the number of
weeks observed is sufficiently large, say 2 years)?

Solution: solve Π = ΠP together with π1 + π2 + π3 = 1. In the long
run, brand 1 has an 11.7% share, brand 2 has 34%, and brand 3 has
54.3% of the market.
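A minimal sketch of this steady-state computation, solving Π = ΠP
with the normalization constraint by replacing one redundant
equation:

```python
import numpy as np

P = np.array([[0.51, 0.35, 0.14],
              [0.12, 0.80, 0.08],
              [0.03, 0.05, 0.92]])

# Solve pi (P - I) = 0 subject to sum(pi) = 1.
# Transpose to a standard Ax = b system, then replace the last
# (redundant) equation with the normalization condition.
A = P.T - np.eye(3)
A[-1, :] = 1.0
b = np.array([0.0, 0.0, 1.0])

pi = np.linalg.solve(A, b)
print(pi)  # ~[0.117, 0.340, 0.543]
```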
Classification of System States
Recurrent state
Transient state
Absorbing state
Recurrent State
A state is said to be a recurrent state if the system, after moving
out of that state, will definitely return to that state again.

Transition probability matrix (rows: current week, columns: next week):

            Brand 1   Brand 2   Brand 3
Brand 1       0.51      0.35      0.14
Brand 2       0.12      0.80      0.08
Brand 3       0.03      0.05      0.92

All three brand states in this matrix are recurrent.
[State diagram not shown: in the diagram, state 'A' is a transient
state.]
Absorbing State
A state is said to be an absorbing state if, once the system enters
that state, it becomes trapped and can never exit. A state i is an
absorbing state if pii = 1.
[State diagram not shown: in the diagram, states 'C' and 'D' are
absorbing states.]
Absorbing State: Example
A system (Markov process) may have both transient and absorbing
states. One example of such a process is accounts receivable.
Consider the following example:
A firm has a one-month billing cycle. At the end of each month,
outstanding bills are classified into one of the following
categories: paid; less than one month old; more than one month old
and less than two months old; and bad debt. The following four
states are defined:
Paid: the bill has been paid
1: less than one month outstanding
2: more than one month and less than two months outstanding
Bad debt: the bill is written off as a bad debt
The transition probability matrix is given below.
Absorbing State: Example (Contd.)
Transition probability matrix (rows: current state, columns: next state):

            Paid     1      2    Bad Debt
Paid          1      0      0       0
1           0.80     0    0.20      0
2           0.60     0      0     0.40
Bad Debt      0      0      0       1
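A small sketch that identifies the absorbing states directly from
the diagonal of this matrix (a state i is absorbing when pii = 1);
the state ordering below matches the table above:

```python
import numpy as np

states = ["Paid", "1", "2", "Bad Debt"]
P = np.array([[1.00, 0.00, 0.00, 0.00],
              [0.80, 0.00, 0.20, 0.00],
              [0.60, 0.00, 0.00, 0.40],
              [0.00, 0.00, 0.00, 1.00]])

# A state i is absorbing when p_ii == 1
absorbing = [s for s, p in zip(states, np.diag(P)) if p == 1.0]
print(absorbing)  # ['Paid', 'Bad Debt']
```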
In the above transition matrix, 'Paid' and 'Bad Debt' are two
absorbing states: once the system enters state 'Paid' or 'Bad
Debt', it becomes trapped and can never exit.
States '1' and '2' are transient states: once the system leaves
state '1' or '2', it will never return to that state.
Absorbing State: Example (Contd.)
Whenever a Markov process has absorbing states, we do not compute
steady-state probabilities, because the system eventually ends up
in one of the absorbing states. With absorbing states, we are
instead interested in the probability that the system will end up
in each of the absorbing states.
Revisiting the Absorbing State Example
Transition probability matrix (rows: current state, columns: next state):

            Paid     1      2    Bad Debt
Paid          1      0      0       0
1           0.80     0    0.20      0
2           0.60     0      0     0.40
Bad Debt      0      0      0       1
Probability of Absorption
Suppose that a process has both transient and absorbing states. By
an appropriate ordering of the states (absorbing states first), the
transition matrix for such a process can be written in the
partitioned form:

    I  O
P =
    R  Q

where I is an identity matrix (absorbing to absorbing), O is a zero
matrix, R contains the transition probabilities from transient to
absorbing states, and Q contains the transition probabilities among
the transient states.
Rearranging Transition Matrix
Transition probability matrix (rows: current state, columns: next state):

            Paid   Bad Debt    1      2
Paid          1       0        0      0
Bad Debt      0       1        0      0
1           0.80      0        0    0.20
2           0.60    0.40       0      0

Fundamental matrix: N = (I - Q)^(-1), where

I = 1  0        Q = 0  0.20
    0  1            0    0
Paid and Bad Debt Amount Calculation

N = 1  0.20        R = 0.80    0
    0    1             0.60  0.40

NR = 0.92  0.08
     0.60  0.40

Suppose the firm has 30 lakh in one-month-old receivables and 45
lakh in two-month-old receivables. Let this be represented by B:
B = [30  45]
Matrix multiplication: B(NR) = [54.60  20.40]
That is, of the 75 lakh outstanding, 54.60 lakh is expected to be
paid and 20.40 lakh is expected to become bad debt.

For a 2x2 matrix
M = a  b
    c  d
the determinant is g = ad - bc, and the inverse (used here to
compute N) is
M^(-1) = (1/g) ×  d  -b
                 -c   a
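Putting the pieces together, a short Python sketch of the
absorption calculation (amounts in lakh; the R and Q blocks are
taken from the rearranged matrix above):

```python
import numpy as np

# Blocks of the rearranged matrix P = [[I, O], [R, Q]]
# Transient states ordered: 1, 2; absorbing states: Paid, Bad Debt
R = np.array([[0.80, 0.00],
              [0.60, 0.40]])
Q = np.array([[0.00, 0.20],
              [0.00, 0.00]])

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix
NR = N @ R                         # absorption probabilities
print(NR)                          # [[0.92, 0.08], [0.60, 0.40]]

B = np.array([30.0, 45.0])         # receivables (lakh) in states 1 and 2
print(B @ NR)                      # [54.6, 20.4] -> paid vs. bad debt
```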