
Risk Management

Lecture 2-1: Probability Theory

Chen Tong

SOE & WISE, Xiamen University

September 12, 2024



Probability Theory

This chapter includes:

▶ Random Variables and Probability Distributions

▶ Joint Distributions

▶ Features of Probability Distributions

▶ The Normal and Related Distributions



Probability Theory

1. Random Variables and Probability Distributions



▶ Experiment: flip a coin and count the number of times the coin
turns up heads.
▶ In theory, the experiment can be indefinitely repeated and has a
well-defined set of outcomes.
▶ A random variable takes on numerical values and has an outcome
that is determined by the experiment.
▶ Notation: random variables are usually denoted by upper case letters
(e.g. X), particular realizations are denoted by the corresponding
lowercase letters (e.g. x = 3 ).
▶ Example: X = 1 if the coin turns up heads and X = 0 if the coin
turns up tails.
▶ Random variables that only take on the values zero and one are
called Bernoulli variables or binary variables.



▶ We often consider discrete random variables (0, 1, 2, . . .).
▶ Simplest example: Bernoulli variable:

P(X = 1) = 0.5, P(X = 0) = 0.5, P(X = 1) + P(X = 0) = 1

▶ In general, the probability can be any number between zero and one.
Call this number θ :
P(X = 1) = θ, P(X = 0) = 1 − θ

▶ If X takes on k possible values {x1 , x2 , . . . , xk }, then the
probabilities p1 , p2 , . . . , pk are defined by

pj = P(X = xj ), j = 1, 2, . . . , k, with ∑_j pj = 1

▶ The probability density function (pdf) of X summarizes the


information concerning the possible outcomes of X and the
corresponding probabilities:
pj = f (xj ) , j = 1, 2, . . . , k



▶ Example: number of free throws made by a basketball player out of
two attempts, {0, 1, 2}
▶ pdf of X is given by f (0) = .20, f (1) = .44, f (2) = .36



▶ The probability to observe a certain realization xj is always between
zero and one:
0 ≤ P (X = xj ) ≤ 1

▶ The sum of the probabilities of all realizations of a random variable


is always equal to one:
∑_{j=1}^{k} f(xj ) = 1

▶ The cumulative distribution function (cdf) of a discrete random


variable is obtained by summing the pdf over all values xj such that
xj ≤ x (for a given point x):

F(x) ≡ ∑_{xj ≤ x} f(xj ) = P(X ≤ x)
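
▶ A minimal Python sketch (my own illustration, reusing the free-throw
pmf from the earlier example) of building the cdf from the pmf:

```python
# Minimal sketch: cdf of a discrete random variable, obtained by
# summing the pmf over all realizations xj <= x
pmf = {0: 0.20, 1: 0.44, 2: 0.36}  # free-throw example from above

assert abs(sum(pmf.values()) - 1.0) < 1e-12  # probabilities sum to one

def cdf(x, pmf):
    """F(x) = P(X <= x)."""
    return sum(p for xj, p in pmf.items() if xj <= x)

print(cdf(1, pmf))  # P(X <= 1) = 0.20 + 0.44 = 0.64
```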



▶ cdf of a discrete random variable



▶ Continuous random variables take on real values

▶ The probability that a continuous random variable takes on any one
particular value is zero

▶ We therefore compute probabilities of events involving a range of values:

▶ Example: constants a and b with a < b

▶ The probability that X lies between the numbers a and b,


P(a ≤ X ≤ b), is the area under the pdf between points a and b



▶ A continuous random variable may be described by a pdf, defined as a
non-negative function over the real numbers that integrates to one:

f(x) ≥ 0 for all x, and ∫_{−∞}^{+∞} f(x) dx = 1

▶ The probability of a realization in the interval [a, b] is the area under
the pdf from a to b,

P(a ≤ X ≤ b) = ∫_{a}^{b} f(x) dx ≥ 0

▶ The cdf of a continuous random variable is defined as

F(x) ≡ ∫_{−∞}^{x} f(s) ds = P(X ≤ x), with f(x) = dF(x)/dx
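
▶ Example (my addition): if X is uniform on [0, 1], then f(x) = 1 on [0, 1],
so F(x) = x for 0 ≤ x ≤ 1 and P(0.2 ≤ X ≤ 0.5) = F(0.5) − F(0.2) = 0.3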



▶ cdf of a continuous random variable



Properties:
▶ 0 ≤ F (x) ≤ 1
▶ if x2 > x1 , then F (x2 ) ≥ F (x1 )
▶ F (+∞) = 1
▶ F (−∞) = 0
▶ For any number c, P(X > c) = 1 − F (c)
▶ For any numbers a < b, P(a < X ≤ b) = F (b) − F (a)



Probability Theory

2. Joint Distributions



▶ We are usually interested in more than one variable
⇒ Joint distributions
▶ Consider two random variables X and Y with a joint probability
distribution which can be described by the joint probability density
function fX ,Y (x, y )
▶ If X and Y are discrete random variables, then the joint probability
density function is given by

fX ,Y (x, y ) = P(X = x, Y = y )

▶ Properties:
fX,Y (x, y) ≥ 0, ∑_x ∑_y fX,Y (x, y) = 1 in the discrete case
fX,Y (x, y) ≥ 0, ∫_x ∫_y fX,Y (x, y) dy dx = 1 in the continuous case



Example:

fX,Y (x, y) = (21/4) x²y  if x² ≤ y ≤ 1,  and 0 otherwise

Since x² ≤ y ≤ 1 and −1 ≤ x ≤ 1, fX,Y (x, y) cannot be negative:
fX,Y (x, y) ≥ 0
The area under the joint distribution is equal to one:

∫_{−1}^{1} ∫_{x²}^{1} fX,Y (x, y) dy dx = ∫_{−1}^{1} ∫_{x²}^{1} (21/4) x²y dy dx = ∫_{−1}^{1} [ (21/8) x²y² ]_{y=x²}^{y=1} dx

= ∫_{−1}^{1} ( (21/8) x² − (21/8) x⁶ ) dx

= ∫_{−1}^{1} (21/8) x² (1 − x⁴) dx

= [ (21/24) x³ − (21/56) x⁷ ]_{−1}^{1} = 4/8 + 4/8 = 1
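
▶ This calculation can be double-checked symbolically; a minimal sketch,
assuming sympy is available (my own illustration):

```python
# Minimal sketch (assumes sympy): verify the joint pdf integrates to one
import sympy as sp

x, y = sp.symbols("x y")
f = sp.Rational(21, 4) * x**2 * y  # joint pdf on x^2 <= y <= 1, -1 <= x <= 1

# integrate over y first (from x^2 to 1), then over x (from -1 to 1)
print(sp.integrate(f, (y, x**2, 1), (x, -1, 1)))  # 1
```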



▶ The probability that (X, Y) falls in a given rectangle is

P(a ≤ X ≤ b, c ≤ Y ≤ d) = ∫_{a}^{b} ∫_{c}^{d} fX,Y (x, y) dy dx

▶ The cumulative joint probability density function

F(z, w) = P(X ≤ z, Y ≤ w)

is

F(z, w) = ∑_{x ≤ z} ∑_{y ≤ w} fX,Y (x, y)

in the discrete case and

F(z, w) = ∫_{−∞}^{z} ∫_{−∞}^{w} fX,Y (x, y) dy dx

in the continuous case



Marginal pdf
▶ If X and Y are independent,

fX,Y (x, y) = fX (x) fY (y),

where fX (x) and fY (y) are marginal probability density functions,
defined as:

fX (x) = ∑_y fX,Y (x, y) in the discrete case
fX (x) = ∫_y fX,Y (x, s) ds in the continuous case

and

fY (y) = ∑_x fX,Y (x, y) in the discrete case
fY (y) = ∫_x fX,Y (s, y) ds in the continuous case

Example above:

fX (x) = ∫_{x²}^{1} (21/4) x²y dy = [ (21/8) x²y² ]_{y=x²}^{y=1} = (21/8) x² − (21/8) x⁶ = (21/8) x² (1 − x⁴)



Conditional Distributions (∗ ∗ ∗)

▶ In econometrics, we are interested in how a random variable X (or a


set of random variables) affects a random variable Y
⇒ Conditional probability density function of Y given X :

fY|X (y | x) = fX,Y (x, y) / fX (x), for fX (x) > 0

▶ Properties: fY|X (y | x) ≥ 0 and

∑_y fY|X (y | x) = 1 in the discrete case
∫_{−∞}^{∞} fY|X (y | x) dy = 1 in the continuous case

▶ Relationship between conditional and joint probabilities:

fY,X (y, x) = fY|X (y | x) fX (x)

▶ Example above:

fY|X (y | x) = f(x, y) / fX (x) = [(21/4) x²y] / [(21/8) x² (1 − x⁴)] = 2y / (1 − x⁴)



Copula Function

▶ Sklar’s theorem states that any multivariate joint distribution can be


written in terms of univariate marginal distribution functions and a
copula which describes the dependence structure between the
variables.
▶ In the bivariate case,

f(x, y) = f(x) · f(y) · C(F(x), F(y)),

or, expressed in log form,

log f(x, y) = log f(x) + log f(y) + log C(F(x), F(y)),

where f(x) and f(y) are the marginal pdfs, and F(x) and F(y) are
the marginal cdfs.



Gaussian Copula Function

▶ For two (standardized) random variables x and y, a commonly used
copula function is the Gaussian copula density

C(x, y) = (1/√|R|) exp( −(1/2) (x, y) (R⁻¹ − I₂) (x, y)′ )

where R is the correlation matrix of x and y, and I₂ is the 2 × 2
identity matrix.
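
▶ A minimal numerical sketch of this density (my own illustration, not
part of the slides):

```python
# Minimal sketch: evaluate the bivariate Gaussian copula density above
# for standardized scores x, y and an assumed correlation rho
import numpy as np

def gaussian_copula_density(x, y, rho):
    """C(x, y) = |R|^(-1/2) exp(-0.5 (x, y)(R^-1 - I2)(x, y)')."""
    R = np.array([[1.0, rho], [rho, 1.0]])         # correlation matrix
    z = np.array([x, y])
    quad = z @ (np.linalg.inv(R) - np.eye(2)) @ z  # quadratic form
    return np.exp(-0.5 * quad) / np.sqrt(np.linalg.det(R))

print(gaussian_copula_density(0.5, -0.2, rho=0.3))
print(gaussian_copula_density(0.5, -0.2, rho=0.0))  # independence: density is 1
```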

▶ David X. Li (2000), "On Default Correlation: A Copula Function
Approach", used the Gaussian copula to price CDOs (collateralized
debt obligations); the model was later dubbed
"The Formula That Killed Wall Street".

▶ A more detailed introduction to Copula function will be given in


later class.



Probability Theory

3. Features of Probability Distributions



▶ We are mainly interested in a few aspects of the distributions of
random variables:
▶ "Measures of central tendency":
▶ Expected value (weighted average of all possible values of X )
▶ Median (splits the pdf into two equal parts)

▶ "Measures of variability":
▶ Variance (squared difference of X from its expected value)
▶ Standard deviation (square root of the variance)

▶ "Measures of association between two random variables":


▶ Covariance (measure of linear dependence between two random
variables)
▶ Correlation coefficient (unit-free measure of linear dependence
between two random variables)



Expected Value
▶ The expected value (or mean) E (X ) = µX of a discrete random
variable X is the weighted average of all possible realizations of X ,
where the probabilities of the realizations x are used as weights:
E(X) = µX = ∑_x x f(x)

▶ For a continuous random variable X , the expected value is


E(X) = µX = ∫_{−∞}^{∞} x f(x) dx

▶ Calculation rules:

E (a) = a,
E (X + Y ) = E (X ) + E (Y ) = µX + µY
E (aX + b) = aE (X ) + b = aµX + b
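
▶ A minimal Python sketch of the discrete case (my own illustration,
reusing the free-throw pmf):

```python
# Minimal sketch: E(X) as the probability-weighted average of realizations
pmf = {0: 0.20, 1: 0.44, 2: 0.36}  # free-throw example from earlier

mu = sum(x * p for x, p in pmf.items())
print(mu)  # 0(0.20) + 1(0.44) + 2(0.36) = 1.16
```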



Variance

▶ We do not only want to know the expected value but also the spread
of a distribution
⇒ How far, on average, does a random variable X deviate from its
expected value?

▶ We usually consider the squared difference (X − µX)²
⇒ This eliminates signs and "punishes" larger distances more strongly



▶ Since (X − µX)² is a random variable itself, we may calculate the
expected squared distance of X from its mean:

Var(X) = σ²X ≡ E[(X − µX)²] = ∑_x f(x)(x − µX)² if discrete
Var(X) = σ²X ≡ E[(X − µX)²] = ∫_{−∞}^{∞} f(x)(x − µX)² dx if continuous

▶ Calculation rule:

Var(aX + b) = Var(aX) = a² Var(X) = a² σ²X

▶ The standard deviation sd(X) of a random variable X is the
(positive) square root of the variance: sd(X) = σX ≡ √Var(X)
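
▶ Continuing the free-throw sketch in Python (my illustration):

```python
# Minimal sketch: Var(X) = E[(X - mu)^2] and sd(X) = sqrt(Var(X))
import math

pmf = {0: 0.20, 1: 0.44, 2: 0.36}
mu = sum(x * p for x, p in pmf.items())               # 1.16

var = sum(p * (x - mu) ** 2 for x, p in pmf.items())  # 0.5344
print(var, math.sqrt(var))                            # 0.5344, ~0.731
```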



Standardizing a Random Variable

▶ We may define a new random variable Z by subtracting the mean of X
and dividing by its standard deviation:

Z = (X − µ)/σ,

which can be rewritten as Z = aX + b with a ≡ 1/σ and b ≡ −µ/σ. Then

E (Z ) = E (aX + b) = aE (X ) + b = (µ/σ) − (µ/σ) = 0

and
Var(Z ) = Var(aX + b) = a2 Var(X ) = σ 2 /σ 2 = 1


▶ Z is called a standardized random variable



Moments

▶ The nth central moment of the probability distribution of a random
variable X is

µn = E[(X − µX)ⁿ]

▶ The first central moment is E[(X − µX)¹] = E(X) − µX = 0

▶ The second central moment is the variance: E[(X − µX)²]


▶ The third central moment E[(X − µX)³] measures the skewness of
the distribution
▶ The third central moment of a symmetric distribution is zero
▶ If the distribution is skewed to the left, it has a negative skewness
▶ If the distribution is skewed to the right, it has a positive skewness
(example: wage distribution)

Skew(X) = E[ ((X − µ)/σ)³ ]

▶ The fourth central moment (kurtosis) is larger if the tails of the
distribution of X are thicker

Kurt(X) = E[ ((X − µ)/σ)⁴ ]
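
▶ A minimal Monte Carlo sketch (my illustration) of these standardized
moments for a normal sample:

```python
# Minimal sketch: sample skewness and kurtosis of a standard normal
# (population values are 0 and 3, respectively)
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)

zs = (z - z.mean()) / z.std()          # standardize the sample
print(np.mean(zs**3), np.mean(zs**4))  # approx. 0 and approx. 3
```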



Covariance

▶ The covariance is a measure of the linear relationship between two


random variables X and Y
▶ Consider the random variable (X − µX)(Y − µY):
▶ The covariance is the expected value of (X − µX)(Y − µY):

Cov(X, Y) = σXY ≡ E[(X − µX)(Y − µY)]
= ∑_x ∑_y f(x, y)(x − µX)(y − µY) if discrete
= ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y)(x − µX)(y − µY) dx dy if continuous



Covariance


Cov(X , Y ) = E [(X − µX ) (Y − µY )]
= E [XY − µX Y − µY X + µX µY ]
= E (XY ) − µX µY

▶ If X and Y are independent, then their covariance is zero:

E (XY ) = E (X )E (Y ) = µX µY ⇒ Cov(X , Y ) = 0

▶ However, the converse is not necessarily true, i.e. random variables
may have a covariance of zero although they are not independent



Covariance

▶ Any random variable with E(X) = 0 and E(X³) = 0 has the
property that, if Y = X², then Cov(X, Y) = 0

▶ X and Y = X² are clearly not independent
⇒ Weakness of the covariance as a general measure of association
between random variables
⇒ The covariance is useful in contexts where relationships are at
least approximately linear

▶ Calculation rules for variances and covariances:

Cov(X, X) = Var(X), Cov(aX + b, cY + d) = ac Cov(X, Y)
Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y)
Var(aX + bY) = a² Var(X) + b² Var(Y) + 2ab Cov(X, Y)
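
▶ A minimal simulation sketch (my illustration) checking the last rule:

```python
# Minimal sketch: check Var(aX + bY) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y)
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)
y = 0.5 * x + rng.standard_normal(1_000_000)  # correlated with x
a, b = 2.0, -3.0

lhs = np.var(a * x + b * y)
rhs = a**2 * np.var(x) + b**2 * np.var(y) + 2 * a * b * np.cov(x, y, ddof=0)[0, 1]
print(lhs, rhs)  # approximately equal
```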



Correlation Coefficient

▶ The fact that the covariance depends on units of measurement is a


deficiency that is overcome by the correlation coefficient between X
and Y :

Corr(X, Y) = ρXY = Cov(X, Y) / (sd(X) sd(Y)) = σXY / (σX σY)

▶ Cauchy-Schwarz inequality: |Cov(X, Y)| ≤ sd(X) sd(Y), hence
−1 ≤ Corr(X, Y) ≤ 1
▶ Cov(X , Y ) and Corr(X , Y ) always have the same sign



Correlation Coefficient

▶ If X and Y are independent, then Corr(X , Y ) = 0, but zero


correlation does not imply independence
▶ The correlation between X and Y is invariant to the units of
measurement of X or Y :
▶ If a1 a2 > 0 ⇒ Corr (a1 X + b1 , a2 Y + b2 ) = Corr(X , Y )
▶ If a1 a2 < 0 ⇒ Corr (a1 X + b1 , a2 Y + b2 ) = − Corr(X , Y )



Conditional Expectation

▶ Covariance and correlation measure the linear relationship between


two random variables and treat them symmetrically. We usually
want to explain Y in terms of X
▶ Suppose we know that X has taken on a particular value x
⇒ We can compute the expected value of Y given this outcome of X
▶ We denote the expected value by E (Y | X = x)



The Law of Iterated Expectations

▶ When Y is a discrete random variable taking on values {y1 , . . . , ym },
then

E(Y | X = x) = ∑_{j=1}^{m} yj fY|X (yj | x)

▶ When Y is continuous, then

E(Y | X = x) = ∫_{−∞}^{+∞} y fY|X (y | x) dy

▶ E (Y | X = x) tells us how the expected value of Y varies with x



▶ Given the random variables Y and X , the conditional expected value
E (Y | X ) is a random variable whose value depends on the value of
X
"Note that E (Y | X = x) is a function of x. If
E (Y | X = x) = g (x), then E (Y | X ) = g (X )"

▶ The law of iterated expectations states that the expected value of


the conditional expected value of Y given X equals the expected
value of Y :
E [E (Y | X )] = E (Y )



▶ Proof for discrete random variables:

E[E(Y | X)] = ∑_x E(Y | X = x) P(X = x)
            = ∑_x ∑_y y P(Y = y | X = x) P(X = x)
            = ∑_x ∑_y y P(Y = y, X = x) = E(Y)
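
▶ A minimal Monte Carlo sketch (my illustration) of the law of
iterated expectations:

```python
# Minimal sketch: E[E(Y | X)] equals E(Y)
import numpy as np

rng = np.random.default_rng(2)
x = rng.integers(0, 3, size=1_000_000)  # X takes values {0, 1, 2}
y = x**2 + rng.standard_normal(x.size)  # Y depends on X

# average the conditional means E(Y | X = v) with weights P(X = v)
lie = sum(y[x == v].mean() * np.mean(x == v) for v in (0, 1, 2))
print(lie, y.mean())  # both are approximately E(Y)
```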



Independence and Correlation

▶ If X and Y are independent, then E (Y | X ) = E (Y )

▶ If E (Y | X ) = E (Y ), then Cov(X , Y ) = 0 and Corr(X , Y ) = 0. In


fact, every function of X is uncorrelated with Y
⇒ If knowledge of X does not change the expected value of Y , then
X and Y must be uncorrelated
⇒ If X and Y are correlated, E (Y | X ) must depend on X

▶ But: If X and Y are uncorrelated, then E(Y | X) could still depend
on X (Example: Y = X² ⇒ E(Y | X) = X²)
⇒ The conditional expectation captures nonlinear relationships
between X and Y that correlation analysis would miss entirely
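
▶ A minimal simulation sketch (my illustration) of this
uncorrelated-but-dependent case:

```python
# Minimal sketch: Y = X^2 is uncorrelated with X (X symmetric around 0),
# yet Y is a deterministic function of X
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(1_000_000)
y = x**2

print(np.corrcoef(x, y)[0, 1])     # approx. 0: no linear association
print(np.corrcoef(x**2, y)[0, 1])  # 1: perfect nonlinear dependence
```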



Conditional Variance

▶ The variance of Y given X = x is given by

Var(Y | X = x) = E[ (Y − E(Y | X = x))² | X = x ]
               = E(Y² | X = x) − [E(Y | X = x)]²

▶ Example: Var(SAVING | INCOME) = 400 + .25 · INCOME


⇒ The variance in savings increases with income
("heteroscedasticity")
▶ If X and Y are independent, then Var(Y | X ) = Var(Y )



∗∗Transformation of R.V. (Random Variable)

▶ Let Y = r(X), where X has pdf fX

▶ The cdf of Y is

G(y) = Pr(Y ⩽ y) = Pr(r(X) ⩽ y) = ∫_{{x : r(x) ⩽ y}} f(x) dx

▶ If r is monotonic, the pdf of Y is

g(y) = f(s(y)) |ds(y)/dy|,

where s(·) is the inverse function of r, i.e. x = s(y).



▶ For instance: compute the pdf of Y if log(Y) ∼ N(µ, σ²)
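
▶ A worked sketch of this exercise (my addition): with
X = log(Y) ∼ N(µ, σ²) we have Y = r(X) = e^X, so the inverse is
s(y) = log(y) with ds(y)/dy = 1/y. For y > 0 the transformation rule gives

g(y) = fX(log y) · (1/y) = (1/(y σ√(2π))) exp( −(log y − µ)²/(2σ²) ),

which is the lognormal pdf.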



Probability Theory

4. The Normal and Related Distributions



▶ The Normal Distribution

▶ The Standard Normal Distribution

▶ The Chi-Square Distribution

▶ The t-Distribution

▶ The F-Distribution



The Normal Distribution

▶ Assuming that random variables defined over populations are


normally distributed simplifies probability calculations
▶ The pdf of a normal random variable X is

f(x) = (1/(σ√(2π))) exp( −(x − µ)²/(2σ²) ), −∞ < x < ∞

where µ = E(X) and σ² = Var(X)

▶ X is normally distributed with expected value µ and variance σ²,
written as X ∼ N(µ, σ²)


▶ Examples: Human heights, weights, test scores, county


unemployment rates, etc.
▶ Income has a log-normal distribution, i.e. Y = log(INCOME ) has a
normal distribution



The Moments of the Normal Distribution

▶ For any non-negative integer p, the central moments are given by

E[(X − µ)ᵖ] = 0 if p is odd
E[(X − µ)ᵖ] = σᵖ (p − 1)!! if p is even

where n!! denotes the double factorial: for even n,

n!! = ∏_{k=1}^{n/2} (2k) = n(n − 2)(n − 4) · · · 4 · 2,

while for odd n,

n!! = ∏_{k=1}^{(n+1)/2} (2k − 1) = n(n − 2)(n − 4) · · · 3 · 1.
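
▶ For example (my addition), p = 2 gives E[(X − µ)²] = σ² · 1!! = σ²,
and p = 4 gives E[(X − µ)⁴] = σ⁴ · 3!! = 3σ⁴, which is why the kurtosis
of a normal distribution equals 3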



The (non)Central Moments of the Normal Distribution
The non-central moments are defined by E[Xᵖ]. They follow from the
central moments via the binomial expansion of E[((X − µ) + µ)ᵖ]; the
first few are E[X] = µ, E[X²] = µ² + σ², E[X³] = µ³ + 3µσ², and
E[X⁴] = µ⁴ + 6µ²σ² + 3σ⁴.


The Standard Normal Distribution

▶ Special case of the normal distribution where the mean is zero
(µ = 0) and the variance is one (σ² = σ = 1)


▶ If a random variable Z has a standard normal distribution, then


Z ∼ N(0, 1)
▶ The pdf of a standard normal distribution is

ϕ(z) = (1/√(2π)) exp(−z²/2), −∞ < z < ∞

▶ The standard normal cumulative distribution function is denoted


Φ(z) = P(Z < z)
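
▶ A minimal sketch for evaluating these numerically (my illustration,
assumes scipy is available):

```python
# Minimal sketch (assumes scipy): standard normal pdf and cdf
from scipy.stats import norm

print(norm.pdf(0.0))   # phi(0)    ~ 0.3989
print(norm.cdf(1.96))  # Phi(1.96) ~ 0.975
```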



The Chi-Square Distribution

▶ The chi-square distribution is obtained from independent, standard


normal variables

▶ Let Zi (i = 1, 2, . . . , n) be independent random variables, each


distributed as standard normal. Then a new random variable may be
defined as the sum of the squares of Zi :
X = ∑_{i=1}^{n} Zi²



The Chi-Square Distribution

▶ Then X has a chi-squared distribution with n degrees of freedom:


X ∼ χ2n
▶ The chi-square distribution is the "ideal" counterpart of the normal
distribution in situations where the random variable is non-negative
▶ The form of the distribution varies with the number of degrees of
freedom, i.e. the number of random variables Zi included in X



The t-Distribution

▶ The t-distribution can be obtained from a standard normal and a


chi-square random variable:
▶ Let Z have a standard normal distribution, let X have a chi-square
distribution with n degrees of freedom and assume that Z and X are
independent. Then the random variable

T = Z / √(X/n)

has a t-distribution with n degrees of freedom, T ∼ tn


▶ The shape of the t-distribution is similar to that of a normal
distribution, except that the t-distribution has more probability mass
in the tails
▶ As the degrees of freedom get large, the t-distribution approaches
the standard normal distribution



The pdf of t-Distribution

▶ The density function of the t-distribution with n degrees of freedom is

f(x) = [ Γ((n + 1)/2) / ( √(nπ) Γ(n/2) ) ] (1 + x²/n)^(−(n+1)/2)

where Γ(·) is the Gamma function.

▶ tn has heavier tails and the amount of probability mass in the tails is
controlled by the parameter n. For n = 1 the t distribution tn
becomes the standard Cauchy distribution, whereas for n → ∞ it
becomes the standard normal distribution N(0, 1).



Moments of the t-Distribution

▶ For 0 < k < n, the raw moments of the t-distribution are

E[Tᵏ] = 0 if k is odd
E[Tᵏ] = [ Γ((k + 1)/2) Γ((n − k)/2) n^(k/2) ] / [ √π Γ(n/2) ] if k is even

▶ Moments of order n or higher do not exist.



Moments of the t-Distribution

▶ For a t-distribution with n degrees of freedom, the expected value is
0 if n > 1, and its variance is n/(n − 2) if n > 2. The skewness is 0 if
n > 3 and the excess kurtosis is 6/(n − 4) if n > 4.

▶ The excess kurtosis is defined as kurtosis minus three.



The Standardized t-Distribution

▶ For n > 2, the standardized t-distribution is given by

ST = T / √(n/(n − 2)), T ∼ tn

so its variance is 1, and its excess kurtosis is 6/(n − 4) if n > 4.

▶ How to derive the pdf of standardized t-Distribution?
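
▶ A sketch of the derivation (my addition), using the transformation
rule from earlier: write ST = cT with c = √((n − 2)/n), so the inverse
map is T = s(ST) = ST/c with ds/dy = 1/c. Applying
g(y) = fT(s(y)) |ds/dy| to the tn density and simplifying
(y/c)²/n = y²/(n − 2) gives

g(y) = [ Γ((n + 1)/2) / ( √((n − 2)π) Γ(n/2) ) ] (1 + y²/(n − 2))^(−(n+1)/2)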



The F-Distribution

▶ Let X1 ∼ χ²_{k1} and X2 ∼ χ²_{k2}, and assume that X1 and X2 are
independent. Then the random variable

F = (X1/k1) / (X2/k2)

has an F-distribution with (k1, k2) degrees of freedom, F ∼ F_{k1,k2}

▶ k1 - numerator degrees of freedom


▶ k2 - denominator degrees of freedom



