
Markov Analysis

Sasadhar Bera, IIM Ranchi


Outline

- Introduction
- Examples of Markov Process
- Transition Probability Matrix
- States and State Probabilities
- Predicting Short Term Future State Probabilities
- Steady State Probabilities or Equilibrium Conditions
- Absorbing States and Fundamental Matrix
What is Markov Process
A Markov process is a probabilistic decision-making tool that describes the behavior of a system over time.

The future behavior of a system can be described by a set of conditional probabilities together with the current state of the system. These conditional probabilities are called transition probabilities.

Current State → Transition Probabilities → Future State

Example of Markov Analysis: Machine Breakdown
A machine is always in one of two states: working or under maintenance. If the machine is working at the beginning of one day, then at the beginning of the next day there is a 90% chance that it is still working. If the machine is under maintenance at the beginning of one day, then at the beginning of the next day there is an 85% chance that it will be in working condition. A manager is interested in the following (a quick numerical sketch follows the matrix):
1) What is the probability that the machine will work continuously for seven days? This helps in scheduling manpower.
2) What is the steady-state (long-term) behavior of the machine? This helps in determining the production capacity of the machine.

Transition probability matrix (current day → next day):

                 Working   Maintenance
   Working        0.90        0.10
   Maintenance    0.85        0.15
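As a quick sketch of both questions (assuming, per the matrix above, that the machine is working at the beginning of day 1): working continuously for seven days means staying in the working state through the six following daily transitions, giving 0.90^6 ≈ 0.53; and the steady-state equation πW = 0.90 πW + 0.85 (1 - πW) gives πW = 0.85/0.95 ≈ 0.895, i.e., in the long run the machine is in working condition roughly 89.5% of days.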
Example of Markov Analysis: Brand Switching
Suppose there are three brands of a soft drink competing in a market. These brands are labeled 1, 2, and 3.

Let us assume that every week a consumer buys one of the three brands. In each week, a consumer may either buy the same brand bought in the previous week or switch to a different brand.

A consumer's preference can be influenced by many factors, such as brand loyalty and brand pressure (i.e., a consumer is persuaded to purchase the same brand).

To understand consumer behavior, sample surveys are frequently conducted. Suppose that one such survey identifies the following consumer behavior:
Example of Markov Analysis: Brand Switching (Contd.)

Of those who currently bought brand 1, in the next week 51% buy the same brand, 35% switch to brand 2, and 14% switch to brand 3.
Of those who currently bought brand 2, in the next week 80% buy the same brand, 12% switch to brand 1, and 8% switch to brand 3.
Of those who currently bought brand 3, in the next week 92% buy the same brand, 3% switch to brand 1, and 5% switch to brand 2.

The brand choices of a consumer over different weeks can be represented by a stochastic process that can enter three different states, namely brand 1, brand 2, and brand 3.

Transition probability matrix (current week → next week):

              Brand 1   Brand 2   Brand 3
   Brand 1     0.51      0.35      0.14
   Brand 2     0.12      0.80      0.08
   Brand 3     0.03      0.05      0.92
Example of Markov Analysis: Brand Switching (Contd.)

The market share of a brand during a period is defined as the average proportion of people who buy the brand during the period. Let us suppose that a Markov process is appropriate for the above situation. The questions of interest might be:

1) What is the market share of a specific brand in the short run (say, in 3 months)?
2) What is the market share of a specific brand in the long run (i.e., the average market share of the brand when the number of weeks observed is sufficiently large, say 2 years)?
3) What is the expected number of weeks that a consumer stays with a particular brand?

Example of Markov Analysis: Social Class Mobility
A problem of interest in the study of social structure concerns the transitions between the social statuses of successive generations in a family. Sociologists often assume that the social class of a son depends only on his parents' social class, not on his grandparents'.

Each family in the society occupies one of three social classes: upper, middle, and lower. Classes are defined based on social structure and occupation, and occupation evolves across generations. A transition probability matrix between the social classes is given below.

Transition probability matrix (current generation → following generation):

                  Upper Class   Middle Class   Lower Class
   Upper Class       0.45           0.48          0.07
   Middle Class      0.05           0.70          0.25
   Lower Class       0.01           0.50          0.49
Example of Markov Analysis: Social Class Mobility (Contd.)
Let us assume that the transition between social classes of successive generations is a Markov process and that the model is appropriate. Sociologists can then answer questions such as:

1) What is the distribution of a family's occupation in the long run?

2) How many generations are necessary for a lower-class family to become an upper-class family?

Application Areas of Markov Process
1) Brand switching: Proportion of customers who will switch from one brand to another over time.
2) TV show market share: Proportion of viewers who switch from one channel to another.
3) Car rental policy: Proportion of renters who return rental cars at various locations.
4) Social class mobility: Transition of social class across successive generations in a family.
5) Automobile insurance: Insurance premium design based on the probability of claim types over time for a policyholder.
6) Planning the movement of patients in a hospital.
7) Machine replacement policy: Proportion of non-available machine hours over time.
8) Developing inspection strategy: Proportion of nonconforming items produced by a manufacturing process.

Revisiting Brand Switching Problem
Transition probability matrix (current week → next week):

              Brand 1   Brand 2   Brand 3
   Brand 1     0.51      0.35      0.14
   Brand 2     0.12      0.80      0.08
   Brand 3     0.03      0.05      0.92

There are three states in this system: brand 1, brand 2, and brand 3. The probability of switching from brand 2 to brand 3 is 0.08; similarly, the probability of switching from brand 3 to brand 2 is 0.05. The probability value in each cell is called a transition probability, and the matrix is called the transition probability matrix (or matrix of transition). A defining property of a transition probability matrix is that the probabilities in each row sum to 1.0. Each diagonal cell represents a repeat purchase of the same brand (a measure of brand loyalty).
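As a small illustrative check (not part of the original slides; NumPy and the variable name P are my choices), the row-sum property and the diagonal loyalty values can be verified in Python:

    import numpy as np

    # Transition probability matrix: rows = current week, columns = next week
    P = np.array([[0.51, 0.35, 0.14],   # from brand 1
                  [0.12, 0.80, 0.08],   # from brand 2
                  [0.03, 0.05, 0.92]])  # from brand 3

    print(P.sum(axis=1))  # each row sums to 1.0 -> [1. 1. 1.]
    print(np.diag(P))     # repeat-purchase (brand loyalty) probabilities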
Definition of Some Terms in Markov Process
State of the system: A state represents the condition of the system at a particular time period.

State probability: The probability that the system will be in a particular state at time period t.

Transition probability: The transition probability pij (≥ 0) is the probability that the system will be in state j at time (t+1), given that the current state at time t is i. Transition probabilities govern the manner in which the state of the system changes from one period to the next. They are usually arranged in a transition matrix.

Markov process: A system that changes from state to state according to a transition probability matrix.
Markov Analysis: State Probabilities
The state probability is represented by πi(n), where i denotes the state and n denotes the nth time period.

Let us assume that the system has two states. π1(1) denotes the probability of the system being in state 1 in period 1, and π2(1) denotes the probability of the system being in state 2 in period 1. The system state probabilities at period 1 are:
π(1) = [π1(1)  π2(1)]

Similarly, π(0) denotes the system state probabilities in the initial period (i.e., time period 0): π(0) = [π1(0)  π2(0)]

Example: 0.30, 0.30, and 0.40 are the proportions of purchases of three different brands at week 0, so π(0) = [0.30  0.30  0.40]

Markov Analysis: Assumptions
1) The number of states of the system is finite. The set of states is collectively exhaustive and mutually exclusive.
2) The system is closed: no state is added to or removed from the system over time.
3) The transition probability matrix remains the same over time.
4) We consider only first-order Markov processes, in which the future state of the system depends only on the current state and the state transition matrix.

Note: A higher-order Markov process considers the current state and one or more previous states.
Markov Analysis: System Behavior
Once the transition probability matrix has been constructed, we can examine the dynamic behavior of the system. In general, a decision maker is interested in two types of behavior: long-term behavior and short-term behavior.

Both the long-term and the short-term behavior of a system are completely determined by the system's transition probability matrix.

Short-term behavior is determined solely by the system's current state probabilities and the transition probability matrix.

The long-run behavior is described by the steady-state probabilities of the system.

Markov Analysis: Short Term System Behavior
The probability that the system is in a particular state at period n+1 is completely determined by the state of the system at period n (and not by the state at period n-1). This is referred to as the memory-less property.

Let us assume that the system has S states. π(n) denotes the state probabilities of the system in period n:
π(n) = [π1(n)  π2(n)  π3(n)  . . .  πS(n)]

The state probabilities at the (n+1)th period are given by π(n+1) = π(n)P, where P is the transition probability matrix:

       p11  p12  p13  . . .  p1S
P  =   p21  p22  p23  . . .  p2S
        .    .    .           .
       pS1  pS2  pS3  . . .  pSS
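As a minimal sketch of one update step (illustrative only; the matrix is the brand switching example from earlier, and the variable names are my own):

    import numpy as np

    P = np.array([[0.51, 0.35, 0.14],
                  [0.12, 0.80, 0.08],
                  [0.03, 0.05, 0.92]])

    pi_n = np.array([0.30, 0.30, 0.40])  # state probabilities in period n

    # One-step update: pi(n+1) = pi(n) P (row vector times matrix)
    pi_next = pi_n @ P
    print(pi_next)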
Markov Analysis: Short Term System Behavior (Contd.)
n-step transition probabilities

Let us assume that the system has S states. π(0) denotes the state probabilities of the system in period 0 (i.e., the initial period): π(0) = [π1(0)  π2(0)  π3(0)  . . .  πS(0)]

The state probabilities at the 1st period: π(1) = π(0)P, where P is the transition probability matrix.

The state probabilities at the 2nd period: π(2) = π(1)P = (π(0)P)P = π(0)P^2

Similarly, the state probabilities at the nth period: π(n) = π(0)P^n

The matrix P^n represents all of the n-step transition probabilities at once and is called the n-step transition probability matrix. It is used to calculate the short-term behavior of a system. Hence, the short-term behavior of a system depends on the initial state probabilities π(0), the transition probability matrix P, and the number of transitions n.
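A compact sketch of the n-step formula (the function name state_probs is my own):

    import numpy as np

    def state_probs(pi0, P, n):
        """Return pi(n) = pi(0) P^n, the state probabilities after n transitions."""
        return pi0 @ np.linalg.matrix_power(P, n)

The examples on the following slides can all be reproduced with this one-liner by varying pi0 and n.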
Predicting Short Term Behavior: Example
Revisiting the brand switching problem

Transition probability matrix (current week → next week):

              Brand 1   Brand 2   Brand 3
   Brand 1     0.51      0.35      0.14
   Brand 2     0.12      0.80      0.08
   Brand 3     0.03      0.05      0.92

Q1) Suppose a customer purchased brand 1 at the initial shopping trip, i.e., week 0.
a) Draw the tree diagram up to the 2nd week. What is the probability that the customer will purchase the same brand in the 2nd week?
Brand Switching Problem: Tree Diagram
A customer purchased brand 1 at week 0. The tree below lists each two-week path; each path probability is the product of its branch probabilities.

Week 1 (from brand 1):      Week 2 (path probability):
  Brand 1 (0.51)             Brand 1: 0.51 × 0.51 = 0.260
                             Brand 2: 0.51 × 0.35 = 0.179
                             Brand 3: 0.51 × 0.14 = 0.071
  Brand 2 (0.35)             Brand 1: 0.35 × 0.12 = 0.042
                             Brand 2: 0.35 × 0.80 = 0.280
                             Brand 3: 0.35 × 0.08 = 0.028
  Brand 3 (0.14)             Brand 1: 0.14 × 0.03 = 0.004
                             Brand 2: 0.14 × 0.05 = 0.007
                             Brand 3: 0.14 × 0.92 = 0.129

Purchase probability of brand 1 at the 2nd week:
0.260 + 0.042 + 0.004 = 0.306
Predicting Short Term Behavior: Example (Contd.)
Q1) A customer purchased brand 1 at the initial shopping trip, i.e., week 0. What is the probability that the customer will repeat-purchase brand 1 in the 2nd week?

Initial probabilities: π(0) = [π1(0)  π2(0)  π3(0)] = [1  0  0]

Purchase probabilities in the 1st week: π(1) = π(0)P
Purchase probabilities in the 2nd week: π(2) = π(1)P = (π(0)P)P = π(0)P^2,
where P is the transition probability matrix:

       0.51   0.35   0.14
P  =   0.12   0.80   0.08
       0.03   0.05   0.92

Purchase probabilities in the 1st week: π(1) = [0.510  0.350  0.140]
Purchase probabilities in the 2nd week: π(2) = [0.306  0.466  0.228]

The probability that the customer repeat-purchases brand 1 is 0.306, matching the tree diagram result.
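A short NumPy check of these numbers (a sketch; variable names are my own):

    import numpy as np

    P = np.array([[0.51, 0.35, 0.14],
                  [0.12, 0.80, 0.08],
                  [0.03, 0.05, 0.92]])

    pi0 = np.array([1.0, 0.0, 0.0])  # customer buys brand 1 at week 0

    pi1 = pi0 @ P   # week 1: [0.51, 0.35, 0.14]
    pi2 = pi1 @ P   # week 2: approx. [0.306, 0.466, 0.228]
    print(pi2[0])   # repeat-purchase probability of brand 1, approx. 0.306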


Predicting Short Term Behavior: Example (Contd.)
Q2) Let the initial purchase probabilities of the three brands (1, 2, and 3) be 0.30, 0.30, and 0.40 for the brand switching problem. These are the brand shares at week 0. What is the market share of each brand in the short run (say, after 12 weeks)?

Initial probabilities: π(0) = [0.30  0.30  0.40]

Brand shares after the 12th week: π(12) = π(0)P^12, where P is the transition probability matrix:

       0.51   0.35   0.14
P  =   0.12   0.80   0.08
       0.03   0.05   0.92

Calculated value: π(12) = [0.119  0.350  0.531]

The brand shares for brands 1, 2, and 3 are 11.9%, 35%, and 53.1%, respectively.
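The same forecast as a NumPy sketch (illustrative; variable names are my own):

    import numpy as np

    P = np.array([[0.51, 0.35, 0.14],
                  [0.12, 0.80, 0.08],
                  [0.03, 0.05, 0.92]])

    pi0 = np.array([0.30, 0.30, 0.40])          # brand shares at week 0
    pi12 = pi0 @ np.linalg.matrix_power(P, 12)  # pi(12) = pi(0) P^12
    print(pi12.round(3))                        # approx. [0.119, 0.350, 0.531]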
Markov Analysis: Long Term System Behavior
The long-term behavior of a system is determined by its steady-state probabilities. The steady-state probabilities are the limiting state probabilities as the number of trials or transitions n grows, i.e., as n → ∞. The steady-state probabilities are written
π = [π1  π2  . . .  πS], where S is the number of states.

The long-term behavior of a system does not depend on the initial state probabilities π(0).

The primary concern is with the steady-state probability values themselves, rather than with how many transitions are required to reach them.

If the Markov process is irreducible and has finitely many states, then we can calculate the steady-state probabilities.

If all the states in a transition probability matrix communicate with each other, then the Markov process is said to be irreducible.
Predicting Long Term Behavior: Example
Revisiting the brand switching problem

Transition probability matrix (current week → next week):

              Brand 1   Brand 2   Brand 3
   Brand 1     0.51      0.35      0.14
   Brand 2     0.12      0.80      0.08
   Brand 3     0.03      0.05      0.92

Q3) What is the market share of a specific brand in the long run (i.e., the average market share of the brand when the number of weeks observed is sufficiently large, say 2 years)?

Q4) If there are 5000 consumers in the market, determine the expected number of consumers for each brand in the long run.
Predicting Long Term Behavior: Example (Contd.)
Transition probability matrix (current week → next week):

              Brand 1   Brand 2   Brand 3
   Brand 1     0.51      0.35      0.14
   Brand 2     0.12      0.80      0.08
   Brand 3     0.03      0.05      0.92

Q3)
Step 1: Let π1, π2, π3 be the steady-state probabilities for brand 1, brand 2, and brand 3, respectively. At steady state π = πP, so each πj is the probability-weighted sum of the jth column of P.
Steady-state equations:
π1 = 0.51 π1 + 0.12 π2 + 0.03 π3
π2 = 0.35 π1 + 0.80 π2 + 0.05 π3
π3 = 0.14 π1 + 0.08 π2 + 0.92 π3
π1 + π2 + π3 = 1
Step 2: Substitute π3 = 1 - π1 - π2 into the first two equations (the three balance equations are linearly dependent, so one can be dropped).
Step 3: Find π1, π2, and π3 by solving the resulting equations, as sketched below.
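Steps 2 and 3 can also be done numerically. A minimal NumPy sketch, replacing one redundant balance equation with the normalization constraint (variable names are my own):

    import numpy as np

    P = np.array([[0.51, 0.35, 0.14],
                  [0.12, 0.80, 0.08],
                  [0.03, 0.05, 0.92]])

    # Steady state satisfies pi P = pi, i.e., (P^T - I) pi^T = 0.
    # The three balance equations are dependent, so replace the last
    # one with the normalization constraint pi1 + pi2 + pi3 = 1.
    A = P.T - np.eye(3)
    A[-1, :] = 1.0
    b = np.array([0.0, 0.0, 1.0])

    pi = np.linalg.solve(A, b)
    print(pi.round(4))  # approx. [0.1165, 0.3398, 0.5437]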
Predicting Long Term Behavior: Example (Contd.)
Solution: π1 ≈ 0.1165, π2 ≈ 0.3398, π3 ≈ 0.5437.

In the long term, brand 1 has about an 11.7% market share, brand 2 about 34%, and brand 3 about 54.4%.

Q4) If there are 5000 consumers in the market, determine the expected number of consumers for each brand in the long run.

Solution:

Expected number of consumers for brand 1 = 0.1165 × 5000 ≈ 583
Expected number of consumers for brand 2 = 0.3398 × 5000 ≈ 1699
Expected number of consumers for brand 3 = 0.5437 × 5000 ≈ 2718

Classification of System States

- Recurrent state
- Transient state
- Absorbing state

Recurrent State
A state is said to be recurrent if, whenever the system leaves that state, it will definitely return to that state again.

Transition probability matrix (current week → next week):

              Brand 1   Brand 2   Brand 3
   Brand 1     0.51      0.35      0.14
   Brand 2     0.12      0.80      0.08
   Brand 3     0.03      0.05      0.92

Each state in the transition matrix shown above is recurrent. If the system (Markov process) has finitely many states and all states are recurrent, then we can calculate the steady-state probabilities.
Transient State
A state is called transient if, once the system leaves that state, there is a chance it will never return to it. Note that not all states of a system (Markov process) can be transient: a finite-state Markov process must contain at least one recurrent state.

(Figure: a state diagram in which state 'A' is a transient state.)
Absorbing State
A state is said to be absorbing if, once the system enters that state, it becomes trapped and can never exit. A state i is an absorbing state if pii = 1.

(Figure: a state diagram in which states 'C' and 'D' are absorbing states.)
Absorbing State: Example
A system (Markov process) may have both transient and absorbing states. One example of such a process is accounts receivable. Consider the following example:

A firm has a one-month billing cycle. At the end of each month, outstanding bills are classified into one of the following categories: paid; less than one month old; more than one month old and less than two months old; and bad debt. The four states are:

Paid: the bill has been paid
1: less than one month outstanding
2: more than one month and less than two months outstanding
Bad debt: the bill is written off as a bad debt

The transition probability matrix is given below.
Absorbing State: Example (Contd.)
Transition probability matrix (current state → next state):

              Paid   1     2     Bad Debt
   Paid       1      0     0       0
   1          0.80   0     0.20    0
   2          0.60   0     0       0.40
   Bad Debt   0      0     0       1

In the above transition matrix, 'Paid' and 'Bad Debt' are two absorbing states: once the system enters 'Paid' or 'Bad Debt', it becomes trapped and can never exit.

States '1' and '2' are transient states: once the system leaves state '1' or '2', it never returns to it.
Absorbing State: Example (Contd.)
Whenever a Markov process has absorbing states, we do not compute steady-state probabilities, because the system eventually ends up in one of the absorbing states. With absorbing states, we are instead interested in the probability that the system ends up in each of the absorbing states.

Revisiting the Absorbing State Example
Transition probability matrix (current state → next state):

              Paid   1     2     Bad Debt
   Paid       1      0     0       0
   1          0.80   0     0.20    0
   2          0.60   0     0       0.40
   Bad Debt   0      0     0       1

If there are 30 lakh in one-month-old receivables and 45 lakh in two-month-old receivables, determine the total amount that will be paid and the total amount that will become bad debt.

Probability of Absorption
Suppose that a process has both transient and absorbing states. By an appropriate ordering of the states (absorbing states first), the transition matrix for such a process can be written in the partitioned form:

P  =   I   O
       R   Q

where I is the appropriate identity matrix, O is a matrix of zeros, R holds the transition probabilities from transient states to absorbing states, and Q holds the transition probabilities among the transient states.

Then N = (I - Q)^-1 is called the fundamental matrix. The probabilities of ending up in each absorbing state, starting from each transient state, are given by NR = (I - Q)^-1 R.

Rearranging Transition Matrix
Transition probability matrix (rearranged, current state → next state):

              Paid   Bad Debt   1     2
   Paid       1       0         0     0
   Bad Debt   0       1         0     0
   1          0.80    0         0     0.20
   2          0.60    0.40      0     0

Fundamental matrix: N = (I - Q)^-1

I  =   1   0        Q  =   0   0.20
       0   1               0   0
Paid and Bad Debt Amount Calculation

N  =   1   0.20     R  =   0.80   0
       0   1               0.60   0.40

NR  =  0.92   0.08
       0.60   0.40

There are 30 lakh in one-month-old receivables and 45 lakh in two-month-old receivables. Let these be represented by the row vector B:
B = [30  45]

Matrix multiplication: B(NR) = [54.6  20.4]

The expected amount that will be paid is 54.6 lakh, and the expected amount of bad debt is 20.4 lakh.
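The whole absorption calculation can be verified in a few lines of NumPy (a sketch; the matrix names follow the slides):

    import numpy as np

    Q = np.array([[0.0, 0.20],   # transitions among transient states 1 and 2
                  [0.0, 0.00]])
    R = np.array([[0.80, 0.00],  # transient -> absorbing (Paid, Bad Debt)
                  [0.60, 0.40]])

    N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix (I - Q)^-1
    NR = N @ R                        # [[0.92, 0.08], [0.60, 0.40]]

    B = np.array([30.0, 45.0])        # receivables (lakh) in states 1 and 2
    print(B @ NR)                     # [54.6, 20.4]: expected paid and bad-debt amounts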
Matrix Inversion
For a two-dimensional (2×2) matrix:

M  =   a   b
       c   d

Determinant of M: g = ad - bc

Inverse of M: M^-1 =   d/g   -b/g
                      -c/g    a/g

Excel function for matrix inversion: MINVERSE

Matrix inversion is also available on scientific calculators.
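A tiny check that the 2×2 formula matches a library inversion (illustrative; the example matrix is I - Q from the receivables problem):

    import numpy as np

    M = np.array([[1.0, -0.20],
                  [0.0,  1.00]])

    g = M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]     # determinant g = ad - bc
    M_inv = np.array([[ M[1, 1], -M[0, 1]],
                      [-M[1, 0],  M[0, 0]]]) / g  # formula above
    assert np.allclose(M_inv, np.linalg.inv(M))   # agrees with NumPy
    print(M_inv)                                  # [[1.0, 0.2], [0.0, 1.0]]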
