Eigenvectors
Minor in AI
February 20, 2025
• Markov Property: The future state depends only on the current state, not the past.
• Transition Matrix: entry Pij is the probability of moving from state i to state j, so every row sums to one:

∑_{j=1}^{N} P_ij = 1
• S1 (Happy)
• S2 (Sad)
• The values in V stabilize after repeated iterations, indicating that the system has
reached a steady state.
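The stabilization described above can be sketched numerically. A minimal example, assuming the two-state Happy/Sad chain with the transition probabilities from the worked example in these notes (Happy → [0.6, 0.4], Sad → [0.3, 0.7]):

```python
import numpy as np

# Row-stochastic transition matrix: entry P[i, j] = P(state i -> state j).
# States: 0 = Happy, 1 = Sad (probabilities from the worked example).
P = np.array([[0.6, 0.4],
              [0.3, 0.7]])

V = np.array([1.0, 0.0])  # start certainly Happy (row vector)
for _ in range(50):
    V = V @ P             # one step of the chain

print(V)  # converges to roughly [0.4286, 0.5714]
```

Whatever the starting distribution, the loop settles on the same vector, which is exactly the steady state computed below.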
Real-World Applications
• Weather Prediction: Modeling the likelihood of sunny, rainy, or cloudy days.
A Markov chain reaches its steady-state distribution V∞ when one more transition no longer changes it. Because P is row-stochastic (row i holds the probabilities of leaving state i), the column vector V∞ is a fixed point of the transpose:

P^T × V∞ = V∞

This means that after applying the transition matrix multiple times, the probabilities remain constant. To find this steady state, we solve:

[0.6 0.3] [x]   [x]
[0.4 0.7] [y] = [y]

Expanding:

0.6x + 0.3y = x
0.4x + 0.7y = y

Rearranging both equations:

−0.4x + 0.3y = 0
0.4x − 0.3y = 0

Both reduce to the same condition:

y = (4/3)x

Since x + y = 1:

x = 3/7 ≈ 0.4286, y = 4/7 ≈ 0.5714

Thus, the steady-state distribution is:

V∞ = [0.4286, 0.5714]^T
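This fixed point is easy to verify numerically. A quick check with NumPy (since P is row-stochastic, the stationary vector is a fixed point of Pᵀ):

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.3, 0.7]])
V_inf = np.array([3/7, 4/7])  # the exact values behind 0.4286 and 0.5714

# Applying one more transition leaves the distribution unchanged.
print(P.T @ V_inf)                       # same vector again
print(np.allclose(P.T @ V_inf, V_inf))   # True
```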
• The output confirms that the Markov chain converges to [0.4286, 0.5714], which
matches our earlier calculations.
Each step of the simulation updates the state vector as

Vnew = M × Vold

where M = P^T (the transpose puts the "into state j" probabilities in row j, so column vectors work). The steady-state distribution V∞ satisfies:

M × V∞ = V∞

which leads to solving:
[0.6 0.3] [x]   [x]
[0.4 0.7] [y] = [y]

Expanding:

0.6x + 0.3y = x
0.4x + 0.7y = y

Rearranging both equations and imposing x + y = 1:

x = 3/7 ≈ 0.4286, y = 4/7 ≈ 0.5714

Thus, the steady-state distribution is:

V∞ = [0.4286, 0.5714]^T
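Since these notes are about eigenvectors, the connection is worth making explicit: V∞ is the eigenvector of Pᵀ with eigenvalue 1, rescaled so its entries sum to 1. A sketch with NumPy:

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.3, 0.7]])

# Eigen-decomposition of the transposed transition matrix.
eigvals, eigvecs = np.linalg.eig(P.T)

# Pick the eigenvector whose eigenvalue is (numerically) 1 ...
idx = np.argmin(np.abs(eigvals - 1.0))
v = np.real(eigvecs[:, idx])

# ... and normalize it into a probability distribution.
V_inf = v / v.sum()
print(V_inf)  # approximately [0.4286, 0.5714]
```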
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.legend()
plt.title("Vector Convergence Under Matrix Multiplication")
plt.xlabel("X-axis")
plt.ylabel("Y-axis")
plt.show()
• The colors in the graph show how the vectors evolve and converge.
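The listing above is only the tail of the plotting script; a self-contained reconstruction (the matrix form, starting vector, and arrow styling are assumptions, not the original code) might look like:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; remove this line to show a window
import matplotlib.pyplot as plt

M = np.array([[0.6, 0.3],
              [0.4, 0.7]])    # column-stochastic form of the example matrix
v = np.array([1.0, -0.5])     # arbitrary starting vector (an assumption)

vectors = [v]
for _ in range(10):           # repeated multiplication
    v = M @ v
    vectors.append(v)

# Color the arrows from early (dark) to late (light) iterations.
colors = plt.cm.viridis(np.linspace(0, 1, len(vectors)))
for vec, c in zip(vectors, colors):
    plt.quiver(0, 0, vec[0], vec[1], angles="xy", scale_units="xy",
               scale=1, color=c)

plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.title("Vector Convergence Under Matrix Multiplication")
plt.xlabel("X-axis")
plt.ylabel("Y-axis")
plt.show()
```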
Each step multiplies the state vector by the (transposed) transition matrix:

V_{t+1} = P^T × V_t

A Markov chain reaches a steady-state distribution when repeated applications of the transition matrix no longer change the state vector:

P^T × V∞ = V∞

For a two-state chain with V∞ = [x, y]^T this reads:

P11 x + P21 y = x
P12 x + P22 y = y
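These two equations, together with the normalization x + y = 1, form a small linear system that can be solved directly. A sketch for the worked example (one fixed-point equation is redundant, so it is replaced by the normalization constraint):

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.3, 0.7]])
n = P.shape[0]

# Fixed-point condition (P^T - I) V = 0; replace the last (redundant)
# row with the constraint x + y = 1.
A = P.T - np.eye(n)
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0

V_inf = np.linalg.solve(A, b)
print(V_inf)  # approximately [0.4286, 0.5714]
```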
V′ = M × V
Effects of a Transformation
• Scaling: The vector’s length can increase or decrease.
An eigenvector v of M is a vector whose direction the transformation preserves:

M v = λv

Eigenvalues λ are found from the characteristic equation:

det(M − λI) = 0

Eigenvectors are obtained by solving:

(M − λI)v = 0
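A minimal numeric sketch of these two steps (the matrix here is an illustrative choice, not one from the notes): NumPy's `eig` solves det(M − λI) = 0 and returns, for each eigenvalue λ, a column vector v satisfying Mv = λv.

```python
import numpy as np

# Illustrative symmetric matrix (an assumption for the example).
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eig solves the characteristic equation det(M - lambda*I) = 0.
eigvals, eigvecs = np.linalg.eig(M)

# Each column of eigvecs is an eigenvector: M v = lambda v.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(M @ v, lam * v)

print(sorted(eigvals.real))  # eigenvalues 1 and 3
```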
Key Concepts