Reference: "Detection, Estimation and Modulation Theory" by H.L. Van Trees
Reference: "Detection, Estimation and Modulation Theory" by H.L. Van Trees
[Figure: detection model — Source → Probabilistic transition mechanism → Observation space → Decision rule]
The source output is one of two choices, hypotheses H0 and H1 (e.g. a binary source emitting 1 under H1 and 0 under H0).
Example: the additive noise n has pmf $p_n(N)$ with $p_n(0) = 1/2$ and $p_n(-1) = p_n(+1) = 1/4$, and the received value is

$$H_1 : r = 1 + n, \qquad H_0 : r = -1 + n$$
DPAWCS 1 Jan12
DCE
Simple Binary Hypothesis Tests
The observation is a vector $R = (R_1, R_2, \ldots, R_N)$ drawn from one of two known conditional probability densities, $p_{r|H_1}(R|H_1)$ or $p_{r|H_0}(R|H_0)$.
Use this information to develop a suitable decision rule.
Bayes Criterion
Based on
1) The source outputs occur with a priori probabilities $P_0$ and $P_1$.
2) A cost is assigned to each possible course of action: $C_{00}, C_{10}, C_{11}, C_{01}$,
where $C_{ij}$ is the cost incurred when the $j$th hypothesis is true and the $i$th hypothesis is chosen.
Because the decision rule must say either H1 or H0, it is equivalent to a rule for dividing the total observation space $Z$ into two parts, $Z_0$ and $Z_1$.
[Figure: the observation space $Z$ is divided into decision regions $Z_0$ (say H0) and $Z_1$ (say H1); an observation $R$ falls in one of them.]
The risk in terms of transition probabilities and decision regions:

$$\mathcal{R} = C_{00}P_0 \int_{Z_0} p_{r|H_0}(R|H_0)\,dR + C_{10}P_0 \int_{Z_1} p_{r|H_0}(R|H_0)\,dR + C_{11}P_1 \int_{Z_1} p_{r|H_1}(R|H_1)\,dR + C_{01}P_1 \int_{Z_0} p_{r|H_1}(R|H_1)\,dR$$

Since $Z = Z_0 \cup Z_1$, each integral over $Z_1$ equals the integral over $Z$ minus the integral over $Z_0$, and each density integrates to 1 over $Z$. Thus

$$\mathcal{R} = P_0 C_{10} + P_1 C_{11} + \int_{Z_0} \Big[ P_1 (C_{01}-C_{11})\, p_{r|H_1}(R|H_1) - P_0 (C_{10}-C_{00})\, p_{r|H_0}(R|H_0) \Big]\, dR$$

Thus: all values of R where the second term is larger than the first should be included in $Z_0$, because they contribute a negative amount to the integral.
Similarly, all values of R where the first term is larger than the second should be excluded from $Z_0$ (assigned to $Z_1$), because they contribute a positive amount to the integral.
Thus, if

$$P_1(C_{01}-C_{11})\, p_{r|H_1}(R|H_1) \ \gtrless\ P_0(C_{10}-C_{00})\, p_{r|H_0}(R|H_0),$$

equivalently

$$\frac{p_{r|H_1}(R|H_1)}{p_{r|H_0}(R|H_0)} \ \overset{H_1}{\underset{H_0}{\gtrless}}\ \frac{P_0(C_{10}-C_{00})}{P_1(C_{01}-C_{11})} \equiv \eta$$

The left-hand side is the likelihood ratio $\Lambda(R)$, and the test is the likelihood ratio test (LRT).
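As a concrete sketch of the likelihood ratio test: the priors, costs, and Gaussian noise variance below are assumed for illustration only, not taken from the notes.

```python
import math

# Hypothetical costs and priors (illustration only): C_ij is the cost of
# choosing H_i when H_j is true.
P0, P1 = 0.6, 0.4
C00, C10, C11, C01 = 0.0, 1.0, 0.0, 1.0

# Bayes threshold: eta = P0 (C10 - C00) / (P1 (C01 - C11))
eta = P0 * (C10 - C00) / (P1 * (C01 - C11))

def gaussian_pdf(x, mean, sigma):
    return math.exp(-(x - mean) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def decide(R, sigma=1.0):
    """LRT for H1: r = 1 + n versus H0: r = -1 + n, here assuming
    Gaussian noise n ~ N(0, sigma^2) instead of the discrete pmf above."""
    lam = gaussian_pdf(R, 1.0, sigma) / gaussian_pdf(R, -1.0, sigma)
    return "H1" if lam > eta else "H0"
```

With these numbers the threshold is $\eta = 1.5$, so an observation near $+1$ is assigned to H1 and one near $-1$ to H0.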
[Figure (a): R → Data processor → $\Lambda(R)$ → Threshold device: $\Lambda(R) \gtrless \eta$ → Decision]
[Figure (b): R → Data processor → $\ln \Lambda(R)$ → Threshold device: $\ln \Lambda(R) \gtrless \ln \eta$ → Decision]
Example 1
N observations, with statistically independent Gaussian noise $n_i \sim N(0, \sigma^2)$:

Under H0: $R_i = -1 + n_i$;  under H1: $R_i = 1 + n_i$,  $i = 1, 2, \ldots, N$.

$$\Lambda(R) = \frac{p_{r|H_1}(R|H_1)}{p_{r|H_0}(R|H_0)} = \frac{\prod_{i=1}^{N} p_{r_i|H_1}(R_i|H_1)}{\prod_{i=1}^{N} p_{r_i|H_0}(R_i|H_0)} = \frac{\prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(R_i-1)^2}{2\sigma^2}\right)}{\prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(R_i+1)^2}{2\sigma^2}\right)} \ \overset{H_1}{\underset{H_0}{\gtrless}}\ \eta$$
Taking logarithms and simplifying,

$$l(R) = \frac{1}{N}\sum_{i=1}^{N} R_i \ \overset{H_1}{\underset{H_0}{\gtrless}}\ \frac{\sigma^2}{2N}\ln\eta$$
$l(R)$, or simply $l$, is a function of the received data.
* Sufficient statistic
When making a decision, knowing the value of the sufficient statistic is just as good as knowing R.
Thus, for the cost assignment $C_{00} = C_{11} = 0$, $C_{01} = C_{10} = 1$, minimizing the Bayes risk amounts to minimizing the total probability of error. The test is then

$$\Lambda(R) \ \overset{H_1}{\underset{H_0}{\gtrless}}\ \frac{P_0}{P_1} \qquad \text{(minimum error probability test)}$$
If in addition $P_0 = P_1$, then

$$\Lambda(R) = \frac{p_{r|H_1}(R|H_1)}{p_{r|H_0}(R|H_0)} \ \overset{H_1}{\underset{H_0}{\gtrless}}\ 1 \qquad \text{(maximum likelihood test)}$$
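A quick Monte Carlo sketch of the sufficient-statistic test of Example 1 (the values of N, sigma, the seed, and the trial count are arbitrary choices for illustration):

```python
import math, random

def log_lrt_decision(R, sigma=1.0, eta=1.0):
    """Sufficient-statistic test for Example 1:
    (1/N) sum(R_i)  >  (sigma^2 / (2N)) ln(eta)  ->  say H1."""
    N = len(R)
    l = sum(R) / N
    gamma = (sigma ** 2) / (2 * N) * math.log(eta)
    return "H1" if l > gamma else "H0"

# Monte Carlo sanity check: with eta = 1 the test is maximum likelihood,
# and H1 samples (mean +1) should almost always be detected for N = 16.
random.seed(0)
N, sigma, trials = 16, 1.0, 1000
hits = 0
for _ in range(trials):
    R = [1.0 + random.gauss(0.0, sigma) for _ in range(N)]
    if log_lrt_decision(R, sigma) == "H1":
        hits += 1
empirical_pd = hits / trials
```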
$$P_F = \int_{Z_1} p_{r|H_0}(R|H_0)\,dR \qquad \text{— false alarm (we say the target is present when it is not).}$$

From now onward, we'll use the notation p(X) instead of px(X).

$$P_D = \int_{Z_1} p(R|H_1)\,dR \qquad \text{— detection (we say the target is present when it is).}$$
$$P_M = \int_{Z_0} p(R|H_1)\,dR = 1 - P_D \qquad \text{— miss (we say the target is absent when it is present).}$$
For the test

$$l = \frac{1}{N}\sum_{i=1}^{N} R_i \ \overset{H_1}{\underset{H_0}{\gtrless}}\ \frac{\sigma^2}{2N}\ln\eta \equiv \gamma$$

$l$ is a (scaled) sum of N statistically independent Gaussian random variables, so $l$ is Gaussian, with

$$E[l|H_0] = -1, \qquad E[l|H_1] = 1, \qquad \mathrm{var}[l|H_0] = \mathrm{var}[l|H_1] = \frac{\sigma^2}{N} \equiv \sigma_1^2$$
$$p(l|H_1) = \frac{1}{\sqrt{2\pi}\,\sigma_1}\exp\!\left(-\frac{(l-1)^2}{2\sigma_1^2}\right), \qquad p(l|H_0) = \frac{1}{\sqrt{2\pi}\,\sigma_1}\exp\!\left(-\frac{(l+1)^2}{2\sigma_1^2}\right)$$
$$P_F = \Pr[H_1 \text{ chosen} \mid H_0 \text{ true}] = \Pr[l > \gamma \mid H_0] = \int_{\gamma}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma_1}\exp\!\left(-\frac{(l+1)^2}{2\sigma_1^2}\right) dl = Q\!\left(\frac{\gamma+1}{\sigma_1}\right)$$

[because $Q(x) = \int_x^{\infty} \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{t^2}{2}\right) dt$, and

$$\int_{\alpha}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(y-\mu)^2}{2\sigma^2}\right) dy = \int_{(\alpha-\mu)/\sigma}^{\infty} \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{z^2}{2}\right) dz = Q\!\left(\frac{\alpha-\mu}{\sigma}\right)$$

substituting $z = (y-\mu)/\sigma$.]
$$P_M = \Pr[H_0 \text{ chosen} \mid H_1 \text{ true}] = \Pr[l < \gamma \mid H_1] = \int_{-\infty}^{\gamma} p(l|H_1)\,dl = 1 - Q\!\left(\frac{\gamma-1}{\sigma_1}\right)$$
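The two error probabilities above are easy to evaluate numerically, since $Q(x) = \tfrac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$. A minimal sketch (the numeric values of $\gamma$ and $\sigma_1$ below are assumed for illustration):

```python
import math

def Q(x):
    # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pf_pm(gamma, sigma1):
    """P_F = Q((gamma+1)/sigma1) and P_M = 1 - Q((gamma-1)/sigma1)
    for the statistic l with means -1 (H0) and +1 (H1), std dev sigma1."""
    pf = Q((gamma + 1.0) / sigma1)
    pm = 1.0 - Q((gamma - 1.0) / sigma1)
    return pf, pm

# Example: sigma1 = 1 and a symmetric threshold gamma = 0 (i.e. eta = 1),
# so by symmetry P_F = P_M.
pf, pm = pf_pm(0.0, 1.0)
```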
Example 2

$$H_0: R_i \sim N(0, \sigma_0^2), \qquad H_1: R_i \sim N(0, \sigma_1^2), \quad \sigma_1 > \sigma_0, \qquad i = 1, 2, \ldots, N$$

The log-LRT reduces to

$$l(R) = \sum_{i=1}^{N} R_i^2 \ \overset{H_1}{\underset{H_0}{\gtrless}}\ \gamma$$
* For performance we need to find the statistics of the sufficient statistic $\sum_i R_i^2$ (a chi-squared random variable).
Now, if all costs and a priori probabilities are known, we can find a Bayes test.
[Figure: Bayes risk $\mathcal{R}_B(P_1)$ versus $P_1$ — a curve running from $C_{00}$ at $P_1 = 0$ to $C_{11}$ at $P_1 = 1$; at the endpoints $P_M = P_F = 0$.]
* Observe that as $P_1$ changes, the decision regions for the Bayes test change, and therefore $P_F$ and $P_M$ change.
If we assume a value of $P_1$, say $P_1 = P_1^*$, and design the Bayes test for it, then the threshold

$$\eta = \frac{(1-P_1^*)(C_{10}-C_{00})}{P_1^*(C_{01}-C_{11})}$$

is fixed, and

$$P_F = \Pr[\Lambda(R) > \eta \mid H_0], \qquad P_M = \Pr[\Lambda(R) < \eta \mid H_1]$$

are also fixed, since $\eta$ is fixed. The resulting risk $\mathcal{R}_F(P_1)$ of this fixed test is a straight line in $P_1$ that touches the Bayes risk curve at $P_1 = P_1^*$.

[Figure: $\mathcal{R}_F(P_1)$ and $\mathcal{R}_B(P_1)$ versus $P_1$, touching at $P_1 = P_1^*$; the maximum of $\mathcal{R}_B(P_1)$ is $\mathcal{R}_{\min\text{-}\max}$.]

Thus $\mathcal{R}_F(P_1) \geq \mathcal{R}_B(P_1)$, because the Bayes test minimizes the risk. A Bayes test designed to minimize the maximum possible risk is called a "minimax test". Hence we choose $P_1^*$ so that $\mathcal{R}_F(P_1)$ is the horizontal line through the maximum of $\mathcal{R}_B(P_1)$, which yields the minimax equation.
$$(C_{11}-C_{00}) + (C_{01}-C_{11})P_M - (C_{10}-C_{00})P_F = 0$$

Let $C_{01} = C_M$, $C_{10} = C_F$, and $C_{00} = C_{11} = 0$; the risk is

$$\mathcal{R}_B = C_F P_F + P_1 (C_M P_M - C_F P_F) = P_0 C_F P_F + P_1 C_M P_M$$
The minimax equation is then

$$C_M P_M = C_F P_F \qquad (**)$$

Solving (**) for $\gamma$ gives the threshold. The value of $\gamma$ given by (**) will be non-negative because $p(l|H_0)$ is zero for negative values of $l$.
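The minimax equation can be solved numerically. A minimal bisection sketch, assuming the Gaussian sufficient-statistic model of Example 1 (means $\pm 1$, standard deviation $\sigma_1$); the cost values are hypothetical:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def minimax_gamma(cf, cm, sigma1=1.0, lo=-10.0, hi=10.0):
    """Solve C_M * P_M(gamma) = C_F * P_F(gamma) by bisection.
    f(gamma) = C_M P_M - C_F P_F is monotone increasing in gamma, since
    P_M grows and P_F shrinks as the threshold moves up."""
    def f(gamma):
        pf = Q((gamma + 1.0) / sigma1)
        pm = 1.0 - Q((gamma - 1.0) / sigma1)
        return cm * pm - cf * pf
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With equal costs the minimax threshold sits at the symmetry point gamma = 0;
# weighting misses more heavily (cm > cf) pushes the threshold down.
g_equal = minimax_gamma(cf=1.0, cm=1.0)
g_skewed = minimax_gamma(cf=1.0, cm=4.0)
```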
Therefore we decrease the threshold until we obtain the largest possible value satisfying (**).
Sufficient Statistic
[Figure: R (observables) → Processor → $l$ (sufficient statistic), $t$ (indifferent statistic)]
e.g.
i) H1 : $R_1 = m + n_1$,  $R_2 = n_2$;   H0 : $R_1 = n_1$,  $R_2 = n_2$.
$n_1, n_2$ are statistically independent, so $R_2$ is an irrelevant statistic.
e.g.

$$H_1: r_i = m + n_i, \qquad H_0: r_i = n_i, \qquad i = 1, 2, \ldots, N$$

$$l = \frac{1}{\sqrt{N}\,\sigma}\sum_{i=1}^{N} R_i \ \overset{H_1}{\underset{H_0}{\gtrless}}\ \frac{\sigma}{\sqrt{N}\,m}\ln\eta + \frac{\sqrt{N}\,m}{2\sigma} = \frac{\ln\eta}{d} + \frac{d}{2}, \qquad d \triangleq \frac{\sqrt{N}\,m}{\sigma}$$

[Figure: the densities $p(l|H_0) = N(0,1)$ and $p(l|H_1) = N(d,1)$, whose means are separated by $d$; the threshold $\ln\eta/d + d/2$ splits the axis, with $P_F$ the area of $p(l|H_0)$ to its right and $P_D$ the area of $p(l|H_1)$ to its right.]

Thus,
$$P_F = \int_{\frac{\ln\eta}{d}+\frac{d}{2}}^{\infty} \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{x^2}{2}\right) dx = Q\!\left(\frac{\ln\eta}{d} + \frac{d}{2}\right)$$

$$P_D = \int_{\frac{\ln\eta}{d}+\frac{d}{2}}^{\infty} \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{(x-d)^2}{2}\right) dx = \int_{\frac{\ln\eta}{d}-\frac{d}{2}}^{\infty} \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{y^2}{2}\right) dy = Q\!\left(\frac{\ln\eta}{d} - \frac{d}{2}\right)$$

As $\eta \to 0$, $\ln\eta \to -\infty$ and $P_F = P_D = 1$ (always say H1); as $\eta \to \infty$, $P_F = P_D = 0$ (always say H0).
[Figure: Receiver Operating Characteristic (ROC) — $P_D$ versus $P_F$ for $d = 0.5, 1.0, 2.0$; the curves bow toward the upper-left corner as $d$ increases, and $\eta$ increases along each curve toward the origin.]
Performance increases monotonically with d (as we would expect).

$$\Pr(\epsilon) = P_0 P_F + P_1 P_M$$

For $P_0 = P_1 = \tfrac{1}{2}$ (so $\eta = 1$),

$$\Pr(\epsilon) = \frac{1}{2}(P_F + P_M) = \int_{d/2}^{\infty} \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{x^2}{2}\right) dx = Q\!\left(\frac{d}{2}\right)$$

Useful bounds on $Q(x)$:

$$\frac{1}{\sqrt{2\pi}\,x}\left(1 - \frac{1}{x^2}\right)\exp\!\left(-\frac{x^2}{2}\right) < Q(x) < \frac{1}{\sqrt{2\pi}\,x}\exp\!\left(-\frac{x^2}{2}\right), \qquad x > 0$$

$$Q(x) < \frac{1}{2}\exp\!\left(-\frac{x^2}{2}\right), \qquad x > 0$$
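The error probability and the two-sided bound on $Q(x)$ can be checked numerically; a small sketch using $Q(x) = \tfrac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$ (the sample values of $x$ and $d$ are arbitrary):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_upper(x):
    # Q(x) < exp(-x^2/2) / (sqrt(2*pi) * x), for x > 0
    return math.exp(-x * x / 2.0) / (math.sqrt(2.0 * math.pi) * x)

def q_lower(x):
    # Q(x) > (1 - 1/x^2) * exp(-x^2/2) / (sqrt(2*pi) * x), for x > 0
    return (1.0 - 1.0 / (x * x)) * q_upper(x)

def pr_error(d):
    # Minimum error probability for equal priors: Pr(err) = Q(d/2).
    return Q(d / 2.0)
```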
Example 2 (continued): unequal variances

$$l(R) = \sum_{i=1}^{N} R_i^2 \ \overset{H_1}{\underset{H_0}{\gtrless}}\ \frac{2\sigma_0^2\sigma_1^2}{\sigma_1^2 - \sigma_0^2}\left(\ln\eta + N\ln\frac{\sigma_1}{\sigma_0}\right) \equiv \gamma \qquad (\sigma_1 > \sigma_0)$$

Consider N = 2.
Under $H_0$, $R_1$ and $R_2$ are i.i.d. $N(0, \sigma_0^2)$, so changing to polar coordinates:

$$P_F = \iint_{z_1^2+z_2^2 > \gamma} \frac{1}{2\pi\sigma_0^2}\exp\!\left(-\frac{z_1^2+z_2^2}{2\sigma_0^2}\right) dz_1\,dz_2 = \int_{\sqrt{\gamma}}^{\infty} \frac{z}{\sigma_0^2}\exp\!\left(-\frac{z^2}{2\sigma_0^2}\right) dz = \exp\!\left(-\frac{\gamma}{2\sigma_0^2}\right)$$

Similarly,

$$P_D = \exp\!\left(-\frac{\gamma}{2\sigma_1^2}\right)$$

Eliminating $\gamma$,

$$P_D = (P_F)^{\sigma_0^2/\sigma_1^2}$$
(ROC)

$$\ln P_D = \frac{\sigma_0^2}{\sigma_1^2}\ln P_F$$

For $P_F < 1$, $P_D$ increases with the ratio $\sigma_1^2/\sigma_0^2$.
e.g.
$\sigma_1^2/\sigma_0^2 = 2$: $P_F = 0.2 \Rightarrow P_D = (0.2)^{1/2} = 0.447$
$\sigma_1^2/\sigma_0^2 = 4$: $P_F = 0.2 \Rightarrow P_D = (0.2)^{1/4} = 0.6687$
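The power-law ROC is easy to check numerically; a small sketch (the threshold and variance values are arbitrary illustrations):

```python
import math

def roc_point(gamma, var0, var1):
    """P_F and P_D for the N = 2 unequal-variance example:
    P_F = exp(-gamma / (2 var0)), P_D = exp(-gamma / (2 var1))."""
    pf = math.exp(-gamma / (2.0 * var0))
    pd = math.exp(-gamma / (2.0 * var1))
    return pf, pd

# At any threshold, P_D should equal P_F ** (var0/var1).
pf, pd = roc_point(gamma=3.0, var0=1.0, var1=2.0)
```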
Example 3
Poisson distribution of events: our observation is the number of events $n$, which obeys a Poisson distribution under both hypotheses, i.e.,

$$\Pr(n \text{ events}) = \frac{m_i^n e^{-m_i}}{n!}, \qquad n = 0, 1, 2, \ldots; \quad i = 0, 1$$

with $E[n] = m_i$. The LRT is

$$\Lambda(n) = \left(\frac{m_1}{m_0}\right)^{n} \exp\!\left(-(m_1 - m_0)\right) \ \overset{H_1}{\underset{H_0}{\gtrless}}\ \eta$$

Taking logarithms,

$$n \ln\frac{m_1}{m_0} - (m_1 - m_0) \ \overset{H_1}{\underset{H_0}{\gtrless}}\ \ln\eta$$
$$n\,[\ln m_1 - \ln m_0] \ \overset{H_1}{\underset{H_0}{\gtrless}}\ \ln\eta + (m_1 - m_0)$$

If $m_1 > m_0$:

$$n \ \overset{H_1}{\underset{H_0}{\gtrless}}\ \frac{\ln\eta + m_1 - m_0}{\ln m_1 - \ln m_0}$$

If $m_1 < m_0$, dividing by the (negative) log ratio reverses the inequality:

$$n \ \overset{H_0}{\underset{H_1}{\gtrless}}\ \frac{\ln\eta + m_1 - m_0}{\ln m_1 - \ln m_0}$$

Since $n$ takes integer values only, only integer thresholds matter, and the test becomes

$$n \ \overset{H_1}{\underset{H_0}{\gtrless}}\ \gamma_1, \qquad \gamma_1 = 0, 1, 2, \ldots$$
Now,

$$P_D = \Pr[n \geq \gamma_1 \mid H_1] = 1 - \Pr[n < \gamma_1 \mid H_1] = 1 - \sum_{n=0}^{\gamma_1 - 1} \frac{m_1^n e^{-m_1}}{n!}, \qquad \gamma_1 = 0, 1, 2, \ldots$$

Similarly,

$$P_F = \Pr[n \geq \gamma_1 \mid H_0] = 1 - \sum_{n=0}^{\gamma_1 - 1} \frac{m_0^n e^{-m_0}}{n!}$$
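The discrete ROC points can be computed directly from these sums; a minimal sketch (using the pair $m_0 = 2$, $m_1 = 4$ mentioned in the notes):

```python
import math

def poisson_cdf(k, m):
    # Pr[n < k] = sum_{n=0}^{k-1} m^n e^{-m} / n!
    return sum(m ** n * math.exp(-m) / math.factorial(n) for n in range(k))

def pf_pd(gamma1, m0, m1):
    """Say H1 when n >= gamma1; returns (P_F, P_D)."""
    return 1.0 - poisson_cdf(gamma1, m0), 1.0 - poisson_cdf(gamma1, m1)

# Discrete ROC points for m0 = 2, m1 = 4, gamma1 = 0..5.
points = [pf_pd(g, 2.0, 4.0) for g in range(6)]
```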
[Figure: ROC for the Poisson problem — it consists of a series of discrete points rather than a continuous curve; point sets are shown for $m_0 = 2, m_1 = 4$ and for $m_0 = 4, m_1 = 10$.]

  LRT     gamma_1    P_F             P_D
  LRT0    0          1               1
  LRT1    1          1 - e^{-m0}     1 - e^{-m1}
1 m0
If PF to have an intermediate value between 1 and 1- e m0 , say 1- e .
2
PF PF
LRT0 LRT1
1 .P
F + 1 PF
2 2
LRT0 LRT1
1 1
= .1 + .(1- e m0 )
2 2
1
= 1- . e m0
2
Therefore the test is
1
If n = 0, say H1 with probability .
2
1
say H0 with probability .
2
If $n \geq 1$, say H1.

$$P_D = \frac{1}{2} P_D\Big|_{\text{LRT0}} + \frac{1}{2} P_D\Big|_{\text{LRT1}} = \frac{1}{2}\cdot 1 + \frac{1}{2}\left(1 - e^{-m_1}\right) = 1 - \frac{1}{2}e^{-m_1}$$
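A small sketch of this randomized rule; the generalization to an arbitrary randomization probability `q` (of which the notes' rule is the `q = 0.5` case) is an added illustration:

```python
import math, random

def randomized_test(n, q=0.5, rng=None):
    """Randomized rule from the notes: if n = 0 say H1 with probability q
    (q = 1/2 in the notes), otherwise say H1 outright."""
    if n >= 1:
        return "H1"
    rng = rng or random.Random()
    return "H1" if rng.random() < q else "H0"

def pf_pd_randomized(m0, m1, q=0.5):
    """Closed-form performance of the randomized rule:
    P_F = 1 - (1-q) e^{-m0},  P_D = 1 - (1-q) e^{-m1}."""
    pf = 1.0 - (1.0 - q) * math.exp(-m0)
    pd = 1.0 - (1.0 - q) * math.exp(-m1)
    return pf, pd
```

Sweeping `q` from 0 to 1 traces the straight-line segment of the ROC between the two neighboring discrete points.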