Name: Phan Truong Buu
ID: EEEEIU12009
SPECIAL TOPIC OF ELECTRICAL ENGINEERING
MIDTERM PROJECT
Problem 1:
X is a random variable uniformly distributed between -10 and 10. Thus the pdf of X is:
f(x) = \begin{cases} \dfrac{1}{20} & \text{for } -10 \le x \le 10 \\ 0 & \text{for } x < -10 \text{ or } x > 10 \end{cases}
The expectation of X:
\mu = E(X) = \int_{-\infty}^{+\infty} x f(x)\,dx = \int_{-10}^{+10} \frac{x}{20}\,dx = 0
The variance of X:
\sigma_X^2 = \mathrm{Var}(X) = E[(X - \mu)^2] = E(X^2) - \mu^2 = \int_{-10}^{+10} \frac{x^2}{20}\,dx = \frac{100}{3}
Matlab program:
rand('state',sum(clock));%set a seed for random variable
a = -10; b = 10; %Range of our random variable
x = a + (b-a).*rand(100000,1);
disp('Mean of X :');
mean(x)
disp('Variance of X:');
var(x)
Result:
Mean of X :
ans =
0.0052
Variance of X:
ans =
33.4786
Discussion:
The uniform distribution has its mean equal to the average of the two limits, and its variance equal to (b - a)^2/12 = 100/3. The Matlab program generates a set of uniformly distributed random numbers, and the resulting mean and variance closely match the calculation above.
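The same cross-check can be sketched outside Matlab; the following Python/NumPy snippet is an illustrative equivalent (the seed and sample count are arbitrary choices), reproducing the theoretical mean 0 and variance 100/3:

```python
import numpy as np

# Illustrative cross-check, assumed equivalent to the Matlab program above.
rng = np.random.default_rng(0)      # fixed seed so the run is reproducible
a, b = -10.0, 10.0                  # range of the uniform random variable
x = rng.uniform(a, b, 100_000)      # 100000 samples of X ~ U(-10, 10)

print(np.mean(x))                   # theory: (a + b) / 2 = 0
print(np.var(x))                    # theory: (b - a)^2 / 12 = 100/3
```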
Problem 2:
The following Matlab program produces the histogram of a random signal in which each sample is uniformly distributed:
rand('state',sum(clock));%set a seed for random variable
N = 1000000;
bin = 500;
a = -10; b = 10; %Range of our random variable
x = a + (b-a).*rand(N+1,1);
[a,b]=hist(x,bin);
figure(1)
hist(x,bin);
title('Histogram of our generated signal')
xlabel('x');
ylabel('Number of occurrences of x');
f1 = a/(length(x)*(b(2)-b(1)));
figure(2);
stem(b,f1,'x');
title('Pdf of our generated signal')
xlabel('x');
ylabel('f(x)');
Result:
Figure 1. Histogram of our generated signal
Figure 2. Pdf of our generated signal
Discussion:
The pdf of our signal has the form of a uniform distribution.
Changing the number of bins in the histogram changes the counts (the magnitude) in the
histogram plot; however, the estimated pdf stays the same, provided the number of bins is large
enough.
Too small a number of bins leads to a misleading pdf plot. For example, the plots below use
bins = 10.
Figure 3.Histogram with 10 bins
Figure 4. Pdf with 10 bins
As we can see, with fewer bins, information about our signal is lost due to the lack of
resolution of the histogram.
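The normalization used above, counts divided by N times the bin width, can be checked with a minimal Python/NumPy sketch (an illustrative stand-in for the Matlab workflow): with enough bins, every bin of the estimated pdf sits near 1/20 = 0.05.

```python
import numpy as np

# Illustrative pdf estimate: counts / (N * bin width), as in the Matlab code.
rng = np.random.default_rng(1)
x = rng.uniform(-10, 10, 1_000_000)

counts, edges = np.histogram(x, bins=500)
f = counts / (len(x) * np.diff(edges))   # estimated pdf in each bin
print(f.min(), f.max())                  # both should be close to 1/20 = 0.05
```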
Problem 3:
For a signal x(n), the power of the signal is defined by:
P(x(n)) = \frac{1}{N} \sum_{n=0}^{N-1} x^2(n)

With x(n) a random signal following a uniform distribution, we get:

P(x(n)) = E(x^2(n))

In the case of Problems 1 and 2, we can obtain:

P(x(n)) = \frac{100}{3}
This can be understood as:

E(x^2(n)) = \mathrm{Var}(x(n)) + [E(x(n))]^2 = \frac{100}{3} + 0 = \frac{100}{3}
In Problem 2, the elements x(k) of our generated discrete signal are independent. This
means that the autocorrelation of x(n) has the form of a delta function. In other words, we have:

\mathrm{cov}(x(i), x(j)) = \begin{cases} \sigma^2 & i = j \\ 0 & i \ne j \end{cases}

r_{xx}(k) = E(x(m)\,x(m-k)) = \sigma^2 \delta(k)
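This near-delta autocorrelation is easy to verify numerically; a minimal Python/NumPy sketch (illustrative, with sigma^2 = 100/3 taken from Problem 1) estimates r_xx(k) at a few lags:

```python
import numpy as np

# Illustrative estimate of r_xx(k) = E[x(m) x(m-k)] for small lags.
rng = np.random.default_rng(2)
x = rng.uniform(-10, 10, 1_000_000)

rs = [np.mean(x[: len(x) - k] * x[k:]) for k in range(4)]
for k, r in enumerate(rs):
    print(k, r)          # close to 100/3 at k = 0, close to 0 for k > 0
```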
Also, based on the Wiener-Khintchin relations, we have:
S(e^{j\omega}) = \sum_{k=-\infty}^{+\infty} r_{xx}(k)\, e^{-j\omega k}

With this in mind, the power spectral density of our signal should be nearly flat from 0 to 2\pi.
The spectrum is periodic with period 2\pi and symmetric around \omega = \pi.
Since our signal is a finite record of samples, each uniformly distributed, the estimated PSD
will differ from the ideal flat spectrum; however, we expect its average to be equal to 100/3.
The following program generates the PSD of our signal:
%%%%%
% Generate a uniformly distributed random signal
N=1000000;L=128;
a = -10; b = 10; %Range of our random variable
x = a + (b-a).*rand(1,N);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%The following code will compute the PSD
%%Based on the DFT of our signal
psd=zeros(1,L);
for k=0:floor(N/L)-1
psdk=(1/L)*abs(fft(x(1+k*L:(k+1)*L))).^2;
psd=psd+psdk;
end
psd1=psd/(k+1);
dw=2*pi/L;
w=0:dw:2*pi-dw;
%%Plot the PSD from 0 to 2*pi
plot(w(1:L),psd1(1:L))
title('PSD of x(n)');
xlabel('omega');
ylabel('Power');
hold
%%Mean value of PSD
mean(psd1)
Figure 5. PSD of x(n)
The program returns the mean value of the PSD as 33.3707, which is very close to 100/3.
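The averaged-periodogram estimate can also be sketched in Python/NumPy (an illustrative equivalent of the Matlab loop, with the same segment length L = 128); its mean should again land near 100/3:

```python
import numpy as np

# Illustrative averaged periodogram, mirroring the Matlab loop above.
rng = np.random.default_rng(3)
N, L = 1_000_000, 128
x = rng.uniform(-10, 10, N)

psd = np.zeros(L)
nseg = N // L
for k in range(nseg):                      # periodogram of each L-sample segment
    seg = x[k * L : (k + 1) * L]
    psd += np.abs(np.fft.fft(seg)) ** 2 / L
psd /= nseg                                # average over all segments

print(psd.mean())                          # nearly flat, mean near 100/3
```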
Problem 4+5:
The given system H(z):
H(z) = \frac{1 + 0.5 z^{-1} + 0.25 z^{-2}}{1 + z^{-1} + z^{-2} + 0.5 z^{-3} + 0.25 z^{-4}}
From the transfer function, we obtain the relation in discrete time domain between y(n) and x(n):
y(n) + y(n-1) + y(n-2) + 0.5\,y(n-3) + 0.25\,y(n-4) = x(n) + 0.5\,x(n-1) + 0.25\,x(n-2)
First observation: the samples of x(n) are independent of one another, but the samples of the
output y(n) depend on one another.
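The recursion can be sketched directly; the following Python/NumPy snippet (illustrative; the name arma_filter is ours, and the code is a hand transcription of the difference equation rather than Matlab's filter) computes the first samples of the impulse response:

```python
import numpy as np

# Direct transcription of the difference equation, solved for y(n):
# y(n) = x(n) + 0.5 x(n-1) + 0.25 x(n-2)
#        - y(n-1) - y(n-2) - 0.5 y(n-3) - 0.25 y(n-4)
def arma_filter(x):
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = x[n]
        if n >= 1: acc += 0.5 * x[n - 1] - y[n - 1]
        if n >= 2: acc += 0.25 * x[n - 2] - y[n - 2]
        if n >= 3: acc -= 0.5 * y[n - 3]
        if n >= 4: acc -= 0.25 * y[n - 4]
        y[n] = acc
    return y

# First samples of the impulse response h(n):
print(arma_filter(np.array([1.0, 0, 0, 0, 0])))  # [1, -0.5, -0.25, 0.25, 0]
```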
The following Matlab program produces y(n):
%%%%%
% Generate a random signal using
% the filtering technique
num=[ 1 .5 .25];den=[1 1 1 .5 .25];
N=1000000;L=128;
a = -10; b = 10; %Range of our random variable
x = a + (b-a).*rand(1,N);
%y=randn(1,N);
y=filter(num,den,x);
figure(1)
hist(y,500);
title('Histogram of y')
[a,b] = hist(y,500);
f = a/(length(y)*(b(2)-b(1)));
figure(2)
plot(b,f);
title('Estimated PDF of y(n)')
Figure 6. Histogram of y(n)
Figure 7. Estimated pdf of y(n)
Observation: although x(n) is uniformly distributed, the estimated pdf of the output y(n) looks
like a Gaussian distribution.
Explanation:
This is an ARMA process, which is a stationary process.
Intuitively, the result is explained by the Central Limit Theorem.
More precisely, a central limit theorem for stationary processes states that when the input is
white noise, the output satisfies:

Y_N \sim N(\mu, \sigma^2)
Taking the expectation of the input-output relation above gives the mean of the output:

E[y(n) + y(n-1) + y(n-2) + 0.5\,y(n-3) + 0.25\,y(n-4)] = E[x(n) + 0.5\,x(n-1) + 0.25\,x(n-2)]

Since E[x(n)] = 0 and y(n) is stationary, this reduces to 3.75\,\mu = 0, so:

\mu = E[y(n)] = 0
The variance of the output can in this case be approximated with the Matlab function var(y),
which returns 46.0833. In theory, the variance of the ARMA process is obtained from the
difference equation as:

\sigma_y^2 = -\mathrm{Cov}_y(1) - \mathrm{Cov}_y(2) - 0.5\,\mathrm{Cov}_y(3) - 0.25\,\mathrm{Cov}_y(4) + (1 + 0.5\,h(1) + 0.25\,h(2))\,\sigma_x^2

where h(n) is the impulse response of the system. Then the result will be:

\sigma_y^2 \approx 46

So y(n) is expected to be approximately normally distributed as N(0, 46).
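This prediction can be spot-checked numerically; a minimal Python/NumPy sketch (illustrative; the loop is a hand transcription of the difference equation above, not Matlab's filter) compares the sample mean and variance of the filtered output with 0 and 46:

```python
import numpy as np

# Illustrative check of mu = 0 and sigma_y^2 ~ 46 on 100000 filtered samples.
rng = np.random.default_rng(4)
x = rng.uniform(-10, 10, 100_000)

y = np.zeros(len(x))
for n in range(len(x)):   # the difference equation above, solved for y(n)
    acc = x[n]
    if n >= 1: acc += 0.5 * x[n - 1] - y[n - 1]
    if n >= 2: acc += 0.25 * x[n - 2] - y[n - 2]
    if n >= 3: acc -= 0.5 * y[n - 3]
    if n >= 4: acc -= 0.25 * y[n - 4]
    y[n] = acc

print(np.mean(y))         # expected near 0
print(np.var(y))          # Matlab reported 46.0833; theory gives roughly 46
```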
The following code produces the pdf of y(n) from theory and experiment:
num=[ 1 .5 .25];den=[1 1 1 .5 .25];
N=1000000;L=128;
a = -10; b = 10; %Range of our random variable
x = a + (b-a).*rand(1,N);
%y=randn(1,N);
y=filter(num,den,x);
[a,b] = hist(y,500);
f = a/(length(y)*(b(2)-b(1)));
figure(1)
plot(b,f);
mu = 0; % theoretical mean of y(n)
std_y = std(y);
c = [-20:0.01:20];
f1 = 1/(sqrt(2*pi*std_y^2))*exp(-c.^2/(2*std_y^2));
hold on
plot(c,f1,'r')
xlabel('y(n)');
ylabel('f(y)');
title('PDF of y(n) from theory and experiment');
Figure 8. PDF of y(n) from experiment and theory.
Problem 6+7:
Using the below code, we obtain the PSD of y(n):
num=[ 1 .5 .25];den=[1 1 1 .5 .25];
N=1000000;L=128;
a = -10; b = 10; %Range of our random variable
x = a + (b-a).*rand(1,N);
%y=randn(1,N);
y=filter(num,den,x);
%%PSD of y
psd=zeros(1,L);
for k=0:floor(N/L)-1
psdk=(1/L)*abs(fft(y(1+k*L:(k+1)*L))).^2;
psd=psd+psdk;
end
psd1=psd/(k+1);
dw=2*pi/L;
w=0:dw:2*pi-dw;
plot(w(1:L),psd1(1:L))
xlabel('omega');
title('PSD of y(n)')
hold
Figure 9. PSD of y(n)
For further discussion, we first plot the power transfer function |H(e^{j\omega})|^2, using the
following Matlab code:
num=[ 1 .5 .25];den=[1 1 1 .5 .25];
[H,W] = freqz(num,den,512,'whole');
psd2=abs(H).^2;
plot(W,psd2,'r')
Figure 10. Power transfer function |H(e^{j\omega})|^2
We first see that the PSD of y(n) is a magnified version of the above graph. Theoretically, we
have:

S_y(e^{j\omega}) = \sigma_x^2\,|H(e^{j\omega})|^2 = \frac{100}{3}\left|\frac{1 + 0.5 e^{-j\omega} + 0.25 e^{-2j\omega}}{1 + e^{-j\omega} + e^{-2j\omega} + 0.5 e^{-3j\omega} + 0.25 e^{-4j\omega}}\right|^2
Plot this and the experimental PSD on the same graph:
num=[ 1 .5 .25];den=[1 1 1 .5 .25];
N=1000000;L=128;
a = -10; b = 10; %Range of our random variable
x = a + (b-a).*rand(1,N);
%y=randn(1,N);
y=filter(num,den,x);
%%PSD of y
psd=zeros(1,L);
for k=0:floor(N/L)-1
psdk=(1/L)*abs(fft(y(1+k*L:(k+1)*L))).^2;
psd=psd+psdk;
end
psd1=psd/(k+1);
dw=2*pi/L;
w=0:dw:2*pi-dw;
plot(w(1:L),psd1(1:L))
xlabel('omega');
title('PSD of y(n) from theory and experiment')
hold on
w = [0:0.01:2*pi];
z = exp(-j*w);
H = (1+0.5*z+0.25*z.^2)./(1+z+z.^2+0.5*z.^3+0.25*z.^4);
H_power = (abs(H)).^2;
PSD_y = 100*H_power/3;
plot(w,PSD_y,'r');
hold on;
Figure 11. Result of PSD from experiment and theory
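As a closing cross-check, the theoretical PSD can also be evaluated in a short Python/NumPy sketch (illustrative, mirroring the Matlab evaluation of H on the unit circle); its value at omega = 0 follows from the coefficient sums, and its average over frequency recovers the output power sigma_y^2:

```python
import numpy as np

# Illustrative evaluation of S_y(e^{jw}) = (100/3) |H(e^{jw})|^2.
w = np.linspace(0, 2 * np.pi, 512, endpoint=False)
z = np.exp(-1j * w)                  # z^{-1} evaluated on the unit circle
H = (1 + 0.5 * z + 0.25 * z**2) / (1 + z + z**2 + 0.5 * z**3 + 0.25 * z**4)
S_y = (100 / 3) * np.abs(H) ** 2

print(S_y[0])                        # (100/3) * (1.75/3.75)^2, about 7.26
print(np.mean(S_y))                  # average power of y(n), close to 46
```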