CH5115 State Parameter Estimation
ED23D008
Assignment 3
Lakshmi S
1a

Given N observations y[k], k = 1, ..., N, from a Poisson distribution with parameter λ:

    f(y | λ) = λ^y e^(−λ) / y!

The mean and variance of a Poisson distribution are both equal to λ:

    E[y] = λ,  Var(y) = λ

The log-likelihood function is computed as

    ln L(λ) = Σ_{k=1}^{N} y[k] ln λ − Nλ − Σ_{k=1}^{N} ln(y[k]!)

Setting ∂ ln L/∂λ = 0 to estimate λ:

    (1/λ) Σ y[k] − N = 0   ⟹   λ̂_ML = (1/N) Σ_{k=1}^{N} y[k]

i.e. the ML estimate is the sample mean. The Fisher information is given by

    I(λ) = −E[∂² ln L/∂λ²] = E[(1/λ²) Σ y[k]] = Nλ/λ² = N/λ

The lower bound on the variance of any unbiased estimator of λ is therefore

    Var(λ̂) ≥ 1/I(λ) = λ/N

Since Var(λ̂_ML) = Var(y)/N = λ/N, the ML estimate attains this bound, so it is an efficient estimator.
% Define the lambda values
lambda_values = [0.1, 1, 10];
% Define the number of observations
N = 1000; % You can adjust this as needed
% Initialize arrays to store results
variance_theta_ML = zeros(1, length(lambda_values));
fisher_information = zeros(1, length(lambda_values));
% Generate random samples from Poisson distribution for each lambda
for i = 1:length(lambda_values)
    lambda = lambda_values(i);
    samples = poissrnd(lambda, N, 1); % Generate Poisson-distributed samples
    % Maximum Likelihood Estimation for lambda (theta)
    theta_ML = sum(samples) / N;
    % Calculate the Fisher Information
    fisher_information(i) = N / lambda;
    % Calculate the variance of the ML estimator
    variance_theta_ML(i) = var(samples) / N;
end
% Display the results
for i = 1:length(lambda_values)
    fprintf('Lambda = %.1f:\n', lambda_values(i));
    fprintf('  Variance of ML estimator: %.4f\n', variance_theta_ML(i));
    fprintf('  Fisher Information: %.4f\n\n', fisher_information(i));
end
Lambda = 0.1:
Variance of ML estimator: 0.0001
Fisher Information: 10000.0000
Lambda = 1.0:
Variance of ML estimator: 0.0010
Fisher Information: 1000.0000
Lambda = 10.0:
Variance of ML estimator: 0.0092
Fisher Information: 100.0000
% Define parameters
N = 300;
sigma_e2 = 1.5;
a = 4;
b = 5;
% Generate data with known parameters
X = randn(N, 1); % Random X values
epsilon = sqrt(sigma_e2) * randn(N, 1); % Random noise
Y = a * X + b + epsilon; % Observed Y values
% Log-likelihood functions for a (with b fixed) and for b (with a fixed)
likelihood_a = @(a_val) -sum((Y - a_val*X - b).^2) / (2*sigma_e2);
likelihood_b = @(b_val) -sum((Y - a*X - b_val).^2) / (2*sigma_e2);
% Grid of values for a and b
a_values = linspace(0, 8, 100);
b_values = linspace(0, 10, 100);
% Calculate likelihood values for a and b
likelihood_values_a = arrayfun(likelihood_a, a_values);
likelihood_values_b = arrayfun(likelihood_b, b_values);
% Plot likelihood functions for a
subplot(2, 1, 1);
plot(a_values, likelihood_values_a);
xlabel('Parameter a');
ylabel('Log Likelihood');
title('Likelihood Function for Parameter a');
% Plot likelihood functions for b
subplot(2, 1, 2);
plot(b_values, likelihood_values_b);
xlabel('Parameter b');
ylabel('Log Likelihood');
title('Likelihood Function for Parameter b');
% Find ML estimates for a and b
[~, idx_a] = max(likelihood_values_a);
[~, idx_b] = max(likelihood_values_b);
% ML estimates
a_ML = a_values(idx_a);
b_ML = b_values(idx_b);
% Display results
fprintf('ML Estimate for a: %.4f\n', a_ML);
ML Estimate for a: 4.0404
fprintf('ML Estimate for b: %.4f\n', b_ML);
ML Estimate for b: 5.0505
1b

Linear regression problem: Y = aX + b + ε, where ε follows a normal distribution N(0, σ_e²).

The log-likelihood is

    ln L(a, b) = −(N/2) ln(2πσ_e²) − (1/(2σ_e²)) Σ_{k=1}^{N} (y[k] − a x[k] − b)²

Fisher information for parameter a:

    ∂ ln L/∂a = (1/σ_e²) Σ x[k] (y[k] − a x[k] − b)

    I(a) = −E[∂² ln L/∂a²] = (1/σ_e²) Σ_{k=1}^{N} x[k]²

Fisher information for parameter b:

    ∂ ln L/∂b = (1/σ_e²) Σ (y[k] − a x[k] − b)

    I(b) = −E[∂² ln L/∂b²] = N/σ_e²
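These expressions can be verified numerically; the sketch below (in Python rather than MATLAB, for a quick independent check, reusing the values N = 300, σ_e² = 1.5, a = 4, b = 5 from the script above) compares finite-difference curvature of the log-likelihood against I(a) = Σx²/σ_e² and I(b) = N/σ_e²:

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma_e2, a, b = 300, 1.5, 4.0, 5.0
x = rng.standard_normal(N)
y = a * x + b + np.sqrt(sigma_e2) * rng.standard_normal(N)

def loglik(a_, b_):
    # Log-likelihood up to an additive constant
    return -np.sum((y - a_ * x - b_) ** 2) / (2 * sigma_e2)

# Second-order central differences approximate -d2lnL/da2 and -d2lnL/db2
h = 1e-4
I_a_num = -(loglik(a + h, b) - 2 * loglik(a, b) + loglik(a - h, b)) / h**2
I_b_num = -(loglik(a, b + h) - 2 * loglik(a, b) + loglik(a, b - h)) / h**2

I_a_theory = np.sum(x ** 2) / sigma_e2   # I(a) = sum(x^2)/sigma_e^2
I_b_theory = N / sigma_e2                # I(b) = N/sigma_e^2
print(I_a_num, I_a_theory)
print(I_b_num, I_b_theory)
```

Because the log-likelihood is quadratic in a and b, the central differences match the analytical Fisher information essentially exactly.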
3a

To determine with what degree of confidence we can assert that the average molecular weight of the polymer lies in a given interval, we perform a confidence-interval calculation based on the sample data.

Given:
Sample mean x̄ = 14678
Sample standard deviation s = 1445
Sample size N = 80
Confidence level used: 95%, i.e. α = 0.05

For a 95% CI, z_{α/2} = 1.96. The margin of error is computed by

    E = z_{α/2} · s/√N = 1.96 × (1445/√N) ≈ 325.04

    CI: x̄ − E ≤ μ ≤ x̄ + E

    CI = 14678 ± 325.04 = [14352.96, 15003.04]

Hence, with 95% confidence, we can assert that the average molecular weight of the polymer is between 14352.96 and 15003.04.
3b

Define the hypotheses:

H0: the mean exam scores of the students from institution A (μ1) and institution B (μ2) are equal.
H1: the mean exam score of students from institution A (μ1) is less than the mean exam score of students from institution B (μ2).

95% confidence level, i.e. α = 0.05 (one-tailed test). The test statistic is

    t = (x̄1 − x̄2) / √(s1²/n1 + s2²/n2)

which from the sample data evaluates to t ≈ −2.24. The degrees of freedom work out to ν ≈ 101, and from the t-table the one-tailed critical value is

    t_{0.05, 101} ≈ 1.660

Since t ≈ −2.24 < −1.660, we can reject the null hypothesis H0. This means there is sufficient evidence, at the 95% confidence level, to conclude that institution A performed worse on the exam than institution B.
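The same test can be sketched numerically (in Python, as a quick independent check; the score samples below are purely hypothetical and are not the assignment's data — only the critical value from the t-table above is reused):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical exam scores: institution A with a lower mean (illustrative only)
scores_A = rng.normal(55, 10, 50)
scores_B = rng.normal(70, 10, 53)

mA, mB = scores_A.mean(), scores_B.mean()
vA, vB = scores_A.var(ddof=1), scores_B.var(ddof=1)
nA, nB = len(scores_A), len(scores_B)

# Welch two-sample t statistic for H1: mean(A) < mean(B)
t_stat = (mA - mB) / np.sqrt(vA / nA + vB / nB)

# One-tailed critical value at alpha = 0.05 for df around 100 (from the t-table)
t_crit = -1.660
print(f"t = {t_stat:.3f}, critical value = {t_crit}")
if t_stat < t_crit:
    print("Reject H0: evidence that institution A scored lower.")
```

With these (made-up) samples the statistic falls well below the critical value, reproducing the rejection logic of the handwritten test.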
2a

Given that v[k] is a zero-mean Gaussian sequence with variance σ², consider its DFT

    V(f_n) = Σ_{k=0}^{N−1} v[k] e^(−j2πf_n k),   f_n = n/N

Expanding the complex exponential into real and imaginary parts:

    V(f_n) = Σ_{k=0}^{N−1} v[k] cos(2πf_n k) − j Σ_{k=0}^{N−1} v[k] sin(2πf_n k)

Define the real and imaginary parts as

    a_n = Re V(f_n) = Σ_{k=0}^{N−1} v[k] cos(2πf_n k)
    b_n = Im V(f_n) = −Σ_{k=0}^{N−1} v[k] sin(2πf_n k)

We know that v[k] follows a Gaussian distribution. This implies that the DFT coefficients a_n and b_n, which depend linearly on the v[k], also follow Gaussian distributions, since any linear combination of jointly Gaussian variables is Gaussian.

Also,

    E[a_n] = E[b_n] = 0
    Var(a_n) = σ² Σ_{k=0}^{N−1} cos²(2πf_n k) ≈ Nσ²/2
    Var(b_n) = σ² Σ_{k=0}^{N−1} sin²(2πf_n k) ≈ Nσ²/2   (for n ≠ 0, N/2)

Thus, theoretically, a_n and b_n are Gaussian distributed.
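The variance expressions can also be checked numerically; this short Python sketch (N, σ, and the bin index chosen arbitrarily) estimates Var(a_n) and Var(b_n) over many realizations for a bin away from DC and compares them with Nσ²/2:

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma, M = 256, 1.0, 10000   # samples, noise std, MC realizations
n = 10                          # a bin away from 0 and N/2

# DFT of M independent white-Gaussian realizations (one per row)
V = np.fft.fft(sigma * rng.standard_normal((M, N)), axis=1)
a_n = V[:, n].real
b_n = V[:, n].imag

print("Var(a_n) estimate:", a_n.var(), " theory:", N * sigma**2 / 2)
print("Var(b_n) estimate:", b_n.var(), " theory:", N * sigma**2 / 2)
```

Both sample variances land close to Nσ²/2 = 128 for these settings, consistent with the derivation above.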
2b
PIL
IIE anfbi
Druffn Ifn
Prr Cfn
IN
Yuu'tn jaitf.nl
jg.fnuceye
its
he consistent check for
For estimate to
bias variance
E Pruyn Éingforweepeiwn
inal
II z
Ier oyyage
To realise bias in the estimate
introduce the sequence
Ill een D
are
I
I otherwise
O
Pur fun
GE g felonyCD éJwnl
I Alun 8 Con 7 da
This the periodogram is the biased
the spectral density grew
version of
E pawn
V Wn
MMTEI.me aces I 51 1
as N a
hug A Cw 21 8 Wn
Applying
in we get
Eleven Durin
Is
UNIT crown
CIN
t
7 8 Inn
Van f Pur fun
as
fig D wn
consistent estimates
not
Periodogram is
an
the no of
U Because increasing
of PSD fn the variance or the
Samplesdoes not decrease
error in the estimates
% Parameters
N = 1000; % Number of samples
M = 10000; % Number of MC simulations
sigma = 1; % Standard deviation of random process v[k]
% Initialize arrays to store results
real_parts_an = zeros(M, N);
imaginary_parts_bn = zeros(M, N);
% Perform MC simulations
for m = 1:M
    % Generate a random sample of v[k] (zero-mean Gaussian)
    v = sigma * randn(1, N);
    % Compute DFT coefficients
    F = fft(v);
    % Extract real and imaginary parts of F
    real_parts_an(m, :) = real(F);
    imaginary_parts_bn(m, :) = imag(F);
end
% Plot histograms of real and imaginary parts of a_n and b_n
figure;
subplot(1, 2, 1);
histogram(real_parts_an(:), 100, 'Normalization', 'pdf');
title('Histogram of Real Parts of a_n');
subplot(1, 2, 2);
histogram(imaginary_parts_bn(:), 100, 'Normalization', 'pdf');
title('Histogram of Imaginary Parts of b_n');
% subplot(2, 2, 3);
% qqplot(real_parts_an(:));
% title('QQ Plot of Real Parts of a_n');
% subplot(2, 2, 4);
% qqplot(imaginary_parts_bn(:));
% title('QQ Plot of Imaginary Parts of b_n');
We can see that the DFT coefficients follow Gaussian distributions in the MC simulations.
% Parameters
sigma = 1; % Standard deviation of v[k]
M = 10000; % Number of realizations
N_values = [100, 200, 500, 1000, 10000]; % Different values of N
% Initialize arrays to store mean and variance
mean_Pfn = zeros(length(N_values), 1);
variance_Pfn = zeros(length(N_values), 1);
for i = 1:length(N_values)
    N = N_values(i);
    Pfn_realizations = zeros(M, N); % Store P[f_n] for each realization
    for m = 1:M
        % Generate a realization of the Gaussian process v[k]
        vk = sigma * randn(1, N);
        % Compute DFT coefficients and estimate PSD
        V = fft(vk);
        Pfn = abs(V).^2 / N;
        Pfn_realizations(m, :) = Pfn;
    end
    % Compute mean and variance of P[f_n]
    mean_Pfn(i) = mean(Pfn_realizations(:));
    variance_Pfn(i) = var(Pfn_realizations(:));
end
% Plot mean and variance as functions of N
figure;
subplot(2, 1, 1);
plot(N_values, mean_Pfn);
xlabel('N');
ylabel('Mean of P[f_n]');
title('Mean of Estimated PSD vs. N');
subplot(2, 1, 2);
plot(N_values, variance_Pfn);
xlabel('N');
ylabel('Variance of P[f_n]');
title('Variance of Estimated PSD vs. N');
The mean settles at the true PSD level, and the variance does not go to zero but remains essentially constant for large values of N, showing that the periodogram is not a consistent estimator.
4a

Given N observations, determine the CRB for the amplitude and frequency of

    y[k] = A sin(2πf₀k) + e[k],   k = 0, ..., N − 1,   θ = (A, f₀)

Given that the noise follows a Gaussian distribution N(0, σ²),

    p(y | θ) = ∏_k (1/√(2πσ²)) exp(−(y[k] − A sin(2πf₀k))²/(2σ²))

The log-likelihood is

    ln L(θ) = −(N/2) ln(2πσ²) − (1/(2σ²)) Σ_k (y[k] − A sin(2πf₀k))²

CRB for the amplitude:

    ∂ ln L/∂A = (1/σ²) Σ_k sin(2πf₀k) (y[k] − A sin(2πf₀k))

    I(A) = −E[∂² ln L/∂A²] = (1/σ²) Σ_k sin²(2πf₀k) ≈ N/(2σ²)

    Var(Â) ≥ 1/I(A) ≈ 2σ²/N

Similarly, for the frequency:

    ∂ ln L/∂f₀ = (2πA/σ²) Σ_k k cos(2πf₀k) (y[k] − A sin(2πf₀k))

    I(f₀) = −E[∂² ln L/∂f₀²] = (4π²A²/σ²) Σ_k k² cos²(2πf₀k) ≈ (2π²A²/σ²) Σ_k k²

    Var(f̂₀) ≥ 1/I(f₀) ≈ σ²/(2π²A² Σ_k k²) ≈ 3σ²/(2π²A²N³)   using Σ_k k² ≈ N³/3
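A quick Monte Carlo sketch in Python (parameter values chosen arbitrarily, with f₀ assumed known so the amplitude estimate reduces to linear least squares) checks that the variance of Â sits at the bound σ²/Σ sin², i.e. approximately 2σ²/N:

```python
import numpy as np

rng = np.random.default_rng(3)
N, A, f0, sigma = 500, 2.0, 0.1, 1.0
k = np.arange(N)
s = np.sin(2 * np.pi * f0 * k)          # known signal shape

# Least-squares (= ML for known f0) amplitude estimate per noisy realization
M = 5000
y = A * s + sigma * rng.standard_normal((M, N))
A_hat = y @ s / (s @ s)                 # A_hat = sum(y*s)/sum(s^2)

crb = sigma**2 / (s @ s)                # exact bound: sigma^2 / sum(sin^2)
print("Var(A_hat):", A_hat.var(),
      " CRB:", crb, " ~2*sigma^2/N:", 2 * sigma**2 / N)
```

Because the estimator is linear and unbiased here, its Monte Carlo variance matches the CRB to within sampling error, confirming the bound is attained.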
4b

BLUE for σ². Given observations y[k] ~ N(μ, σ²) with μ = 0, consider the sample-variance estimator

    σ̂² = (1/N) Σ_{k=1}^{N} y²[k]

This estimator is unbiased for σ², since E[σ̂²] = (1/N) Σ E[y²[k]] = σ², and it satisfies the BLUE properties as the sample variance using the transformed observations y²[k], in which it is linear.

Consider taking the logarithm of the squared observations y²[k] to linearize the problem:

    z[k] = ln y²[k] = ln σ² + ln(y²[k]/σ²)

In this transformed domain the problem becomes linear in ln σ². However, since the estimator σ̂² is already linear (in y²[k]) and unbiased, the sample variance of the original data described above is the optimal choice.
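The unbiasedness of σ̂² = (1/N) Σ y²[k] for zero-mean Gaussian data is easy to illustrate numerically; a small Python sketch (σ² chosen arbitrarily) averages the estimate over many realizations:

```python
import numpy as np

rng = np.random.default_rng(4)
N, sigma2, M = 200, 2.5, 20000       # samples, true variance, MC trials

# M independent zero-mean Gaussian realizations, one per row
y = np.sqrt(sigma2) * rng.standard_normal((M, N))
sigma2_hat = np.mean(y**2, axis=1)   # one estimate per realization

print("Mean of estimates:", sigma2_hat.mean(), " true sigma^2:", sigma2)
```

The average of the estimates is very close to the true σ², as the unbiasedness argument predicts.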