ADDIS ABABA UNIVERSITY
Addis Ababa Institute of Technology
School of Electrical and Computer Engineering
Computer Assignment I
Statistical Digital Signal Processing
Prepared By: Hailemichael Guadie Mengesitu
ID: GSE/1090/08
Due Date: April 18, 2016
Submitted to: Dr. Eneyew Adugna
Solution for question #1
% generating 2000 Gaussian random numbers: means 1 and 1.5, standard deviations 1 and 0.5
% each sample is formed as z = m + sd*r, where r is a standard normal random number
x=1+1.0*randn(1000,1); % first half of the samples
y=1.5+.5*randn(1000,1); % second half of the samples
t=[x;y]; % concatenating the 2 vectors into a 2000-element sample
plot (t);
xlabel ('n');
ylabel ('Xn');
title('Random process');
x=1+randn(1000,1);
subplot(2,2,1)
hist(x);% histogram of the first sample
hold on
xlabel ('x');
ylabel ('N(x)');
title('mean=1.0 and standard deviation=1.0');
subplot(2,2,2)
histfit(x)
title('theoretical Gaussian distribution')
grid on
hold off
% plotting the histogram of the second sample (mean 1.5, standard deviation 0.5)
% superimposed with the theoretical Gaussian distribution
y=1.5+0.5*randn(1000,1);
subplot(2,2,3)
hist(y)% hist finds the min. and max. of the data, then chooses the bin centers
hold on
xlabel ('x');
ylabel ('N(x)');
title('mean=1.5 and standard deviation=0.5');
subplot(2,2,4)
histfit(y)% histogram superimposed with the theoretical fit
title('theoretical Gaussian distribution')
legend('histogram','theoretical fit')
grid on
hold off
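The z = m + sd*r construction above can be sanity-checked outside MATLAB. The sketch below, written in Python/NumPy for illustration (the seed is an arbitrary assumption), mirrors the randn usage and verifies the sample statistics:

```python
import numpy as np

# Illustrative cross-check of z = m + sd*r: scaling and shifting
# standard normal draws gives the requested mean and standard deviation.
rng = np.random.default_rng(0)
x = 1.0 + 1.0 * rng.standard_normal(1000)   # first half: mean 1.0, std 1.0
y = 1.5 + 0.5 * rng.standard_normal(1000)   # second half: mean 1.5, std 0.5
t = np.concatenate([x, y])                  # 2000-sample nonstationary process

print(x.mean(), x.std())   # close to 1.0 and 1.0
print(y.mean(), y.std())   # close to 1.5 and 0.5
print(t.size)              # 2000
```

The sample means and standard deviations land within a few percent of the requested values, as expected for 1000 draws.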
% answer for question #2 part 1, with lambda=0.995, and comparison of
% theoretical and estimated values
% generating Gaussian random numbers with means 1 and 1.5 and standard deviations 1 and 0.5
x=1+randn(1000,1); % first half of the sample
y=1.5+.5*randn(1000,1); % second half of the sample
t=[x;y]; % concatenating the 2 vectors into a 2000-element sample
mr=zeros(2000,1); % mr holds the mean estimate from the time-average recursion
sr=zeros(2000,1); % sr holds the second-moment estimate from the time-average recursion
ml=zeros(2000,1); % ml holds the mean estimate from the least-squares (forgetting-factor) recursion
sl=zeros(2000,1); % sl holds the second-moment estimate from the least-squares (forgetting-factor) recursion
mr(1)=t(1);
ml(1)=t(1);
sr(1)=t(1)^2;
sl(1)=t(1)^2;
% tracking the mean using the time-average recursion
for i=2:2000
k=i-1;
mr(i)=mr(k)+(t(i)-mr(k))/i;
end
% tracking the second moment using the time-average recursion
for i=2:2000
k=i-1;
sr(i)=sr(k)+((t(i))^2-sr(k))/i;
end
% lambda is 0.995
l=.995;
% tracking the mean using the least-squares (forgetting-factor) recursion
for i=2:2000
k=i-1;
ml(i)=ml(k)+(1-l)*(t(i)-ml(k));
end
% tracking the second moment using the least-squares (forgetting-factor) recursion
for i=2:2000
k=i-1;
sl(i)=sl(k)+(1-l)*((t(i))^2-sl(k));
end
% plotting the second-moment estimates from the least-squares and time-average recursions
plot(sl);
hold on
plot(sr,':');
legend ('least squares','time average','Location','southeast')
xlabel ('n');
ylabel('sn')
title ('lambda=0.995');
hold off
Comment
%The estimates converge to the theoretical values, but due to the random
%fluctuations in the data they deviate somewhat from the exact theoretical values.
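The two recursions above can be sketched compactly, here in Python/NumPy for illustration (variable names mirror the MATLAB code; seed and lambda are taken from the assignment):

```python
import numpy as np

# Sketch of the two mean trackers:
#   running average  m_i = m_{i-1} + (x_i - m_{i-1}) / i          (equal weight)
#   exponential      m_i = m_{i-1} + (1 - lam)*(x_i - m_{i-1})    (forgetting factor)
rng = np.random.default_rng(1)
t = np.concatenate([1.0 + rng.standard_normal(1000),
                    1.5 + 0.5 * rng.standard_normal(1000)])

lam = 0.995
mr = np.zeros(t.size)   # time-average (running) mean
ml = np.zeros(t.size)   # exponentially weighted mean
mr[0] = ml[0] = t[0]
for i in range(1, t.size):
    mr[i] = mr[i-1] + (t[i] - mr[i-1]) / (i + 1)   # i+1 samples seen so far
    ml[i] = ml[i-1] + (1 - lam) * (t[i] - ml[i-1])

# After the mean shift at n=1000, the forgetting-factor tracker moves
# toward 1.5, while the running average is dragged by the whole history.
print(mr[-1], ml[-1])
```

This illustrates why the forgetting-factor estimator is preferred for a nonstationary process: its effective memory of roughly 1/(1-lambda) samples lets it follow the change in the mean.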
%Question #3: repeat part 2 for lambda=0.99 and lambda=0.98
% generating Gaussian random numbers with means 1 and 1.5 and standard deviations 1 and 0.5
x=1+randn(1000,1);
y=1.5+.5*randn(1000,1);
t=[x;y]; % concatenating the 2 vectors into a 2000-element sample
mr=zeros(2000,1); % mr holds the mean estimate from the time-average recursion
sr=zeros(2000,1); % sr holds the second-moment estimate from the time-average recursion
ml=zeros(2000,1); % ml holds the mean estimate from the least-squares (forgetting-factor) recursion
sl=zeros(2000,1); % sl holds the second-moment estimate from the least-squares (forgetting-factor) recursion
mr(1)=t(1); sr(1)=t(1)^2;
ml(1)=t(1); sl(1)=t(1)^2;
% tracking the mean using the time-average recursion
for i=2:2000
k=i-1;
mr(i)=mr(k)+(t(i)-mr(k))/i;
end
% tracking the second moment using the time-average recursion
for i=2:2000
k=i-1;
sr(i)=sr(k)+((t(i))^2-sr(k))/i;
end
% lambda is 0.98
l=.98;
% tracking the mean using the least-squares (forgetting-factor) recursion
for i=2:2000
k=i-1;
ml(i)=ml(k)+(1-l)*(t(i)-ml(k));
end
% tracking the second moment using the least-squares (forgetting-factor) recursion
for i=2:2000
k=i-1;
sl(i)=sl(k)+(1-l)*((t(i))^2-sl(k));
end
% plotting the second-moment estimates from the least-squares and time-average recursions
plot(sl);
hold on
plot(sr,':');
legend ('least squares','time average','Location','southeast')
xlabel ('n');
ylabel('sn')
title ('lambda=0.98');
hold off
% plotting the mean estimates from the least-squares and time-average recursions
plot (ml);
hold on
plot(mr,':');
legend ('least squares','time average','Location','southeast')
xlabel ('n');
ylabel('mn')
title ('lambda=0.98');
axis ([0 2000 0 2])
axis fill;
axis square;
hold off
3.3 (a)
%generating 1000 samples of zero-mean, unit-variance white Gaussian noise
wgn=randn(1,1000);
(b)
%sample autocorrelation for lags -50 to 50
[Rww,lags]=xcorr(wgn,50,'biased');
subplot(2,2,1)
stem(lags,Rww);
xlabel('time k')
ylabel('Rww')
title('The sample autocorrelation for WGN of length 1,000')
(c)
% sample autocorrelation by segmenting wgn(n) into 10 sequences of 100 samples each
% the autocorrelation as a sum of the sample autocorrelations of the 10 segments
Rwws=0;
for s=1:10
seg=wgn((s-1)*100+1:s*100);      % s-th segment of 100 samples
[R,lags]=xcorr(seg,50,'biased'); % biased sample autocorrelation of the segment
Rwws=Rwws+R;
end
subplot(2,2,2)
stem(lags,Rwws)
xlabel('time k')
ylabel('Rwws')
title('The sample autocorrelation from 10 segments')
(d)
% autocorrelation of another WGN
%Generating 10,000 samples of zero mean white Gaussian noise
w=randn(1,10000);
[Rww1,lags]=xcorr(w,100,'biased');
subplot(2,2,3)
stem(lags,Rww1)
xlabel('time k')
ylabel('Rww1')
title('The sample autocorrelation for WGN of length 10,000')
The resulting plots are a figure with three panels: the sample autocorrelation
for WGN of length 1,000 (Rww vs. time k), the sample autocorrelation from 10
segments (Rwws vs. time k), and the sample autocorrelation for WGN of length
10,000 (Rww1 vs. time k).
Observation: The autocorrelation of unit-variance WGN is the delta function
delta(k), with amplitude 1 at k=0 and zero at all other lags, but the plot in
part (b) has nonzero values at lags other than k=0. The plots in (b) and (c)
differ mainly at k=0, where summing the ten segment autocorrelations gives a
peak of about 10 instead of 1. As part (d) shows, the sample autocorrelation
approaches the exact autocorrelation delta(k) as the length of the sequence
increases.
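The convergence claimed above can be sketched numerically, here in Python/NumPy for illustration (the helper function and seed are assumptions, not part of the assignment):

```python
import numpy as np

# Illustrative sketch: the biased sample autocorrelation
#   r(k) = (1/N) * sum_n w(n) w(n+k)
# of unit-variance white noise approaches delta(k) as N grows
# (r(0) -> 1, all other lags -> 0).
def biased_autocorr(w, maxlag):
    N = len(w)
    return np.array([np.dot(w[:N-k], w[k:]) / N for k in range(maxlag + 1)])

rng = np.random.default_rng(2)
r_short = biased_autocorr(rng.standard_normal(1000), 50)
r_long = biased_autocorr(rng.standard_normal(10000), 50)

print(r_short[0], r_long[0])        # both near 1
print(np.abs(r_long[1:]).max())     # off-lag leakage shrinks with N
```

The off-lag values scale roughly like 1/sqrt(N), which is why the 10,000-sample estimate in part (d) looks much closer to a delta function than the 1,000-sample one in part (b).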
3.4 Given an all-pole filter

    H(z) = b(0) / (1 + a(1)z^-1 + a(2)z^-2)

driven by unit-variance white Gaussian noise w(n).
(a)
%Generating AR(2)process x(n) for a(1)=0,a(2)=-0.81,b(0)=1
a=[1 0 -0.81];
b=[1];
wgn=randn(1,24);
x=filter(b,a,wgn);
(b)
%Sample autocorrelation
[Rxx,lags]=xcorr(x,'biased');
subplot(2,2,1)
stem(lags,Rxx)
xlabel('time k')
ylabel('Rxx')
title('Sample autocorrelation of x(n)')
(c)
%Power spectrum of x(n) (here approximated by the raw FFT of the data)
Xw=fft(x); % note: fft(x) is complex; only its real part is plotted below
subplot(2,2,2)
stem(real(Xw));
xlabel('frequency')
ylabel('Xw')
title ('Power spectrum of x(n)')
(d) The Yule-Walker equations used to estimate a(1), a(2), and b(0):

    [ Rx(0)  Rx(1)  Rx(2) ] [  1   ]   [ |b(0)|^2 ]
    [ Rx(1)  Rx(0)  Rx(1) ] [ a(1) ] = [    0     ]        (1)
    [ Rx(2)  Rx(1)  Rx(0) ] [ a(2) ]   [    0     ]

Solving the last two rows for a(1) and a(2):

    [ Rx(0)  Rx(1) ] [ a(1) ]     [ Rx(1) ]
    [ Rx(1)  Rx(0) ] [ a(2) ] = - [ Rx(2) ]                (2)

From equation (2), I have calculated a(1) and a(2). Then b(0) is calculated
from the first row of equation (1) as

    b(0) = sqrt( Rx(0) + a(1)Rx(1) + a(2)Rx(2) )
%Estimating the filter parameters a(1),a(2),b(0):
k0=find(lags==0); % xcorr returns lags -23..23, so lag 0 is at index k0
A=[Rxx(k0) Rxx(k0+1);Rxx(k0+1) Rxx(k0)];
b=[-Rxx(k0+1) -Rxx(k0+2)]';
a=(A\b)'
b0=sqrt(Rxx(k0)+a(1)*Rxx(k0+1)+a(2)*Rxx(k0+2))
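As a cross-check of equations (1) and (2), the exact autocorrelation of the given AR(2) process (a(1)=0, a(2)=-0.81, b(0)=1) can be plugged into the Yule-Walker system. This Python/NumPy sketch (illustrative, not part of the assignment code) recovers the true parameters:

```python
import numpy as np

# Theoretical autocorrelation of x(n) = 0.81*x(n-2) + w(n), i.e.
# a(1) = 0, a(2) = -0.81, b(0) = 1, unit-variance white noise w(n):
r0 = 1.0 / (1 - 0.81**2)   # from r(0) + a2*r(2) = b0^2 with r(2) = -a2*r(0)
r1 = 0.0                   # r(1) = 0 because a(1) = 0
r2 = 0.81 * r0             # r(2) = -a2*r(0)

# Solve the 2x2 Yule-Walker system [r0 r1; r1 r0][a1; a2] = -[r1; r2]
A = np.array([[r0, r1], [r1, r0]])
a1, a2 = np.linalg.solve(A, -np.array([r1, r2]))

# b(0) from the first Yule-Walker row
b0 = np.sqrt(r0 + a1 * r1 + a2 * r2)
print(a1, a2, b0)   # 0, -0.81, 1 (up to rounding)
```

With the exact autocorrelation the system returns the true parameters; the MATLAB estimate above differs because it uses a 24-sample biased autocorrelation estimate.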
(e)
%Power spectrum of x(n) from the estimated model parameters:
w=linspace(0,pi,256); % frequency grid over [0,pi]
Pxw=(abs(b0))^2./(abs(1+a(1)*exp(-j*w)+a(2)*exp(-2*j*w))).^2;
subplot(2,2,3)
plot(w/pi,Pxw)
xlabel('frequency (x pi rad/sample)')
ylabel('Pxw')
title('Power spectrum of x(n) from a(k) and b(0)')
The resulting plots and calculated filter parameters are:
Filter parameters, part(d)
a = [ 1.4201 -0.5450]
b0 = 0.7237
The figure shows three panels: the sample autocorrelation of x(n) (Rxx vs.
time k), the power spectrum of x(n) (Xw vs. frequency), and the power spectrum
of x(n) computed from a(k) and b(0) (Pxw vs. frequency).
Observation: The estimated filter parameters differ from the given ones in
part (d). The power spectra in parts (c) and (e) also differ significantly: in
the formula used to calculate the power spectrum in part (e), both the
numerator and denominator are magnitude squares, so that spectrum is
nonnegative over the entire frequency range.
3.5 Problem: To compare the periodogram method and the Yule-Walker method of
computing spectrum estimates for a first-order autoregressive model. The signal
model for the sequence y(n) is

    y(n) = a*y(n-1) + e(n)

where e(n) is white noise. Hence the signal model in the z-domain is

    H(z) = 1 / (1 - a*z^-1)
Matlab Code
y=[2.583, 2.617, 2.289, 2.783, 2.862, 3.345, 2.704, 1.527, 2.096, 2.050, 2.314,0.438, 1.276,
0.524, -0.449, -1.736, -2.599, -1.633, 1.096, 0.348, 0.745,0.797, 1.123, 1.031, -0.219, 0.593,
2.855, 0.890, 0.970, 0.924];
a=0.8   % true AR coefficient
esig=1  % true noise variance
for i=1:29
m1(i)=y(i)*y(i+1);
end
for i=1:30
n1(i)=y(i)^2;
end
m=sum(m1(1:29));
n=sum(n1(1:30));
ahat=m/n
%the estimate of a
sighat=(1-ahat^2)*n/30
%the estimate of esig
w1=0:0.01:1; % digital frequency in units of pi
sth=1./((abs(1-0.8*exp(-j*pi*w1))).^2); % theoretical spectrum
sth1=10*log10(sth);
plot(w1,sth1,':')
hold on
syw=sighat./((abs(1-ahat*exp(-j*pi*w1))).^2); % Yule-Walker estimate
syw1=10*log10(syw);
plot(w1,syw1,'-')
hold on
sw2=0;
for k=1:30
sw2=sw2+y(k)*exp(-j*pi*w1*k); % DTFT of the data
end
sper=((abs(sw2)).^2)/30; % periodogram
sper1=10*log10(sper);
plot(w1,sper1,'--')
xlabel('digital frequency w in units of pi')
ylabel('dB')
title('Yule Walker vs. Periodogram Spectra')
legend('sth','syw','sper')
Simulated results:
a = 0.8000
esig = 1
ahat = 0.8060
sighat = 1.1699
Observation: The simulated periodogram spectrum estimate differs from the
given graph, mainly because of a windowing mismatch.
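The estimators used above, ahat = sum y(i)y(i+1) / sum y(i)^2 and sighat = (1 - ahat^2) * mean(y^2), can be sketched on synthetic AR(1) data. This Python/NumPy version is illustrative (the seed and sample size are assumptions):

```python
import numpy as np

# Sketch of the AR(1) estimators on synthetic data from the assumed model
# y(n) = a*y(n-1) + e(n), with true a = 0.8 and unit-variance e(n).
rng = np.random.default_rng(3)
N, a = 5000, 0.8
y = np.zeros(N)
for n in range(1, N):
    y[n] = a * y[n-1] + rng.standard_normal()

ahat = np.dot(y[:-1], y[1:]) / np.dot(y, y)   # lag-1 correlation estimate of a
sighat = (1 - ahat**2) * np.dot(y, y) / N     # noise variance estimate
print(ahat, sighat)   # close to 0.8 and 1.0
```

With a long record the estimates land close to the true values; with only the 30 samples given in the problem, deviations like ahat = 0.8060 and sighat = 1.1699 are to be expected.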