S.A.
ENGINEERING COLLEGE
(AN ISO 9001:2008 Certified, NBA Accredited Institution)
Approved By AICTE & Affiliated to Anna University
QUESTION BANK
Subject Code: AP9211
Subject Name: Advanced Digital Signal
Processing
Submitted By: M. Vanitha Lakshmi
Department : PG Studies
Signature of the HOD Signature of the PRINCIPAL
UNIT I
DISCRETE RANDOM SIGNAL PROCESSING
Part A (Ist half)
1. Calculate the mean and variance for the autocorrelation function of random signals.(AU)
2. State Parseval's theorem. (AU)
3. What is a random process?
4. State Wiener-Khinchine relationship. (AU)
5. Define variance.
6. What is an autocorrelation function? (AU)
7. Derive the relationship between the autocorrelation and autocovariance of a random process.
8. A zero-mean white noise v(n) has variance σv². Determine its power density spectrum.
9. Mention any two properties of a WSS random process. (AU)
10. State any two properties of auto correlation. (AU)
11. What is a Hermitian Toeplitz matrix?
12. What is jointly WSS?
13. Define bias, asymptotic unbiasedness and consistency. (AU)
14. State Wold decomposition theorem.
15. Obtain the autocorrelation corresponding to the PSD Px(e^jw) = 5 + 2cos w. (AU)
16. State spectral factorization theorem. (AU)
Part A (IInd half)
17. Define Signal modeling.
18. Write the Padé approximation equations for the coefficients ap(k).
19. What is the limitation of the Padé approximation method and how is it overcome?
20. Write the minimum error equation obtained in Prony’s method.
21. Write the expression for least-squares minimization of the error using iterative prefiltering.
22. Define the normal equations for the autocorrelation method and the covariance method.
23. Write the equation for covariance modeling error.
24. Write the minimum error expression using autocorrelation method.
25. State the orthogonality Principle.
26. Define a predictable process.
27. What is a regular process?
28. Differentiate between power density spectrum and cross power density spectrum. (AU)
29. What is an innovations process?
30. What is the innovations representation of a process?
31. Define and distinguish AR, MA and ARMA processes.
Part B (Ist half)
1. Obtain the filter to generate a random process with power spectrum
Px(w) = (5 + 4cos w)/(10 + 6cos w) from white noise. (AU)
2. Determine the power spectrum of the random process generated by x(n) = w(n) - x(n-4), where
w(n) is a white noise process with variance σ².
3. The power spectrum of a wide-sense stationary process x(n) is given as
Px(w) = (25 - 24cos w)/(26 - 10cos w). Find the whitening filter H(z) that produces unit-variance
white noise when the input is x(n).
4. State and prove the Wiener-Khinchine relation and Parseval's theorem. (AU)
5. State and explain the properties of the autocorrelation and the power spectrum. (AU)
6. Find the autocorrelation Rxx given x(n) = {10, 12, 15, 17} and the cross-correlation Rxy given
y(n) = {1, 0.5, 0, 0.5}. (AU)
7. A harmonic process x(n) is described by the equation x(n) = A sin(nw + Φ), where A and w are
fixed constants and Φ is a uniformly distributed random variable over the interval -π to +π.
Determine its mean, variance and autocorrelation. (AU)
8. Consider a linear shift-invariant system having system function
that is excited by zero-mean exponentially correlated noise x(n) with
autocorrelation sequence rx(k) = (1/2)^|k|. Let y(n) be the output process. (AU)
(i) Find the power spectrum Py(z) of y(n).
(ii) Find the autocorrelation sequence ry(k) of y(n).
9. If the output random process y(n) is obtained by filtering a WSS random process x(n) with a
stable LSI filter h(n), write down the relation between y(n) and x(n) in terms of autocorrelation,
given that the autocorrelation of x(n) is rx(k). (AU)
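As an illustrative aid for question 6, the raw correlation sums can be computed in a few lines. This is a minimal Python sketch assuming the one-sided, unnormalized definitions rxx(k) = Σn x(n)x(n+k) and rxy(k) = Σn x(n)y(n+k); an exam answer may use a normalized or two-sided convention instead.

```python
def autocorr(x):
    """Unnormalized autocorrelation estimate: r(k) = sum_n x(n)*x(n+k)."""
    N = len(x)
    return [sum(x[n] * x[n + k] for n in range(N - k)) for k in range(N)]

def crosscorr(x, y):
    """Unnormalized cross-correlation estimate: r(k) = sum_n x(n)*y(n+k)."""
    N = min(len(x), len(y))
    return [sum(x[n] * y[n + k] for n in range(N - k)) for k in range(N)]

# Small worked example: x(n) = {1, 2, 3}
print(autocorr([1, 2, 3]))        # [14, 8, 3]
print(crosscorr([1, 2], [3, 4]))  # [11, 4]
```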
Part B (IInd half)
10. Explain the following parametric model equations: 1)ARMA 2) AR. (AU)
11. Obtain the Yule-Walker equations for ARMA processes.
12. State and prove the spectral factorization theorem. (AU)
13. Obtain the Yule-Walker equations for AR and MA processes.
14. Obtain the Yule-Walker equations for MA processes.
15. The autocorrelation sequence of a discrete random signal is Rx(k) = {0.1, 0.2, 0.3, 0.4}. Obtain a
third-order AR model by solving the Yule-Walker equations. Assume the modeling error is
0.1. (AU)
16. Explain the Padé approximation method for modeling a signal.
17. Derive the expression for minimizing the least-squares error using iterative prefiltering.
18. Derive the all-pole modeling equations using the covariance method.
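For the all-pole modeling questions above, the first-order case makes the normal equations concrete: with autocorrelation lags rx(0) and rx(1), the single coefficient satisfies rx(0)·a(1) = -rx(1), and the modeling error is ε = rx(0) + a(1)·rx(1). A minimal sketch (the lag values below are illustrative, not taken from any question):

```python
def ar1_model(r0, r1):
    """First-order all-pole (AR(1)) model from autocorrelation lags.

    Normal equation: r0 * a1 = -r1  ->  a1 = -r1 / r0
    Modeling error:  eps = r0 + a1 * r1
    """
    a1 = -r1 / r0
    eps = r0 + a1 * r1
    return a1, eps

a1, eps = ar1_model(1.0, 0.5)
print(a1, eps)  # -0.5 0.75
```

Higher orders replace this scalar division with a Toeplitz system solve, which is what the Levinson-Durbin recursion of Unit III does efficiently.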
UNIT II
SPECTRAL ESTIMATION
Part A (Ist half)
1. Compare parametric and non-parametric methods of spectral estimation. (AU)
2. Find the PSD given x(n) = {1,3}(AU)
3. What are the demerits of the periodogram? (AU)
4. Mention the non-parametric methods of power spectrum estimation. (AU)
5. What is a periodogram?
6. Define modified periodogram.
7. Mention the differences between the periodogram and the modified periodogram.
8. What is periodogram averaging? (AU)
9. What is averaging of modified periodograms?
10. What are the bias, resolution and variance in Bartlett's method?
11. Compare the Welch and Bartlett methods.
12. How is the Blackman-Tukey method used in periodogram smoothing?
13. Bartlett's method is used to estimate the power spectrum of a process from a sequence of
N = 2000 samples. What is the minimum length L that may be used for each sequence to
achieve a resolution of Δf = 0.005?
14. What are the disadvantages of non-parametric methods of spectrum estimation?
15. How do parametric methods of spectral estimation overcome the limitations of
non-parametric methods?
16. What is an asymptotically unbiased estimator?
17. Define the periodogram. How can it be smoothed?
Part A (IInd half)
18. Name any one application of the AR model. (AU)
19. What is spectral line splitting?
20. What is the effect of model order on the resulting spectrum in AR method?
21. What are the main disadvantages of using linear methods for power Spectrum estimation?
22. What are the different methods in AR modeling?
23. Compare ARMA, MA and AR models with respect to complexity.
24. Write the bias equation of the modified periodogram.
25. What are the criteria for selecting the model order?
26. Write the expression for power spectrum estimates of AR and MA models.
27. The autocorrelation of a wide-sense stationary random process is rx(k) = 2δ(k) + jδ(k-1) -
jδ(k+1). Find the power spectrum. (AU)
Part B (Ist half)
1. Explain the periodogram method of spectral estimation and evaluate the performance of the
periodogram. (AU)
2. With necessary derivation, explain periodogram averaging using Bartlett's method. (AU)
3. Explain the Blackman-Tukey method of power spectrum estimation. (AU)
4. Explain the Welch method of estimating the spectrum of signals. (AU)
5. In the Welch method, calculate the variance of the Welch power spectrum estimate with the
Bartlett window if there is 50% overlap.
6. Find the power spectrum for each of the following WSS random processes having the
autocorrelation sequences given by (AU)
(i) rx(k) = 2δ(k) + jδ(k-1) - jδ(k+1).
(ii) rx(k) = δ(k) + 2(0.5)^|k|
7. Consider that Bartlett's method is used to estimate the power spectrum of a process from a
sequence of N = 2000 samples. What is the minimum length L that may be used for each
sequence to achieve a resolution of Δf = 0.005? Also determine its quality factor. (AU)
8. For the autocorrelation sequence of an MA(1) process,
Rx(k) = 17δ(k) + 4δ(k-1) + 4δ(k+1), find the power spectrum and an FIR filter that generates
x(n).
9. Explain various pitfalls found in spectral analysis.
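The periodogram questions above (e.g. Part A question 2, with x(n) = {1, 3}) can be checked numerically. A minimal Python sketch using a direct DFT and the definition Px(k) = |X(k)|²/N; note it samples the PSD only at the N DFT frequencies and assumes a rectangular window:

```python
import cmath

def periodogram(x):
    """Periodogram Px(k) = |X(k)|^2 / N at the N DFT frequencies 2*pi*k/N."""
    N = len(x)
    X = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
         for k in range(N)]
    return [abs(Xk) ** 2 / N for Xk in X]

# x(n) = {1, 3}: X = [4, -2], so Px = [8, 2] at w = 0 and w = pi
print(periodogram([1, 3]))
```

Bartlett's method would split a long record into segments, apply this to each segment, and average the results to trade resolution for variance.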
Part B (IInd half)
10. Explain how power spectrum can be estimated from the AR Model. (AU)
11. Briefly describe the Yule-Walker method of spectrum estimation.
12. Discuss the MA and ARMA techniques of spectrum estimation.
13. Explain the ARMA, MA and AR models. (AU)
UNIT III
LINEAR ESTIMATION AND PREDICTION
Part A (Ist half)
1. What is linear prediction?(AU)
2. What is meant by prediction? (AU)
3. Draw the structure of the forward prediction error filter.(AU)
4. What is meant by backward prediction error?
5. What is the Levinson order-update equation?
6. What is the Levinson-Durbin recursion?
7. What is orthogonality principle? (AU)
8. What is a whitening filter? (AU)
9. State the least mean square error criterion.(AU)
10. Define the prediction error filter mathematically. (AU)
11. What is the use of a Wiener smoothing filter?
12. Write the minimum error equation obtained in Prony’s method.
13. What are the disadvantages of the least-squares method?
14. What are the problems in Wiener filtering?
15. What is filtering?
16. What is smoothing?
17. What is deconvolution?
18. What is pole-zero modeling?
Part A (IInd half)
19. What is a lattice structure? What is the advantage of such a structure? (AU)
20. Write down the system function H(z) for a causal Wiener filter. (AU)
21. Write down the expression for Kalman gain.
22. What is FIR Wiener filter?
23. What is IIR Wiener filter?
24. Mention the advantages and applications of the discrete Kalman filter. (AU)
25. What are the Wiener-Hopf equations?
26. How can a non-stationary process be processed?
27. Why do we go for designing the Wiener filter using a correction factor?
28. Bring out the expression for inverting a Toeplitz matrix in the Levinson recursion.
29. How can a Wiener filter be modified into a linear predictor?
30. What is the importance of Linear prediction in signal processing?
31. Give a few applications of the Kalman filter.
32. Give a few applications of the Wiener filter.
Part B (Ist half)
1. Briefly explain forward and backward error prediction. (AU)
2. Discuss Prony's method of linear prediction. (AU)
3. Explain the Levinson-Durbin algorithm for computing the prediction error filter coefficients
and the prediction error power.
4. Derive the Wiener Hopf equations and the minimum mean squared error.(AU)
5. With a suitable model and assumptions, establish that Wiener filters are designed based on the
criterion of minimizing the mean square error.
6. Explain the Levinson-Durbin algorithm for computing the prediction error filter coefficients
and the prediction error power.
7. Find a third-order all-pole model for a signal having autocorrelation values rx(0) = 1,
rx(1) = 0.5, rx(2) = 0.5, rx(3) = 0.25 using the Levinson-Durbin recursion.
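The Levinson-Durbin recursion asked for in questions 3 and 7 can be sketched directly. A minimal pure-Python version, assuming the autocorrelation lags rx(0)..rx(p) are given, returning the prediction-error filter coefficients and the final error power:

```python
def levinson_durbin(r):
    """Levinson-Durbin recursion.

    r : autocorrelation lags [r(0), r(1), ..., r(p)]
    Returns (a, E), where a = [1, a(1), ..., a(p)] is the prediction-error
    filter and E is the final prediction-error power.
    """
    a = [1.0]
    E = r[0]
    p = len(r) - 1
    for m in range(1, p + 1):
        # Reflection coefficient k_m from the current coefficients.
        k = -sum(a[i] * r[m - i] for i in range(m)) / E
        # Order update: a_m(i) = a_{m-1}(i) + k * a_{m-1}(m - i)
        a = [a[i] + k * a[m - i] if 0 < i < m else a[i] for i in range(m)] + [k]
        E *= 1.0 - k * k
    return a, E

# Part B question 7: rx = 1, 0.5, 0.5, 0.25
a, E = levinson_durbin([1.0, 0.5, 0.5, 0.25])
print(a, E)  # a = [1, -0.375, -0.375, 0.125], E = 0.65625
```

Each pass also yields the reflection coefficients k_m, which are exactly the lattice-filter parameters asked about in Part B question 18.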
Part B (IInd half)
8. Outline the design procedure of a causal IIR Wiener filter that produces the MMSE estimate of
x(n). (AU)
9. A random process x(n) is generated as follows: x(n) = αx(n-1) + v(n) + βv(n-1),
where v(n) is white noise with mean mv and variance σv². Design a first-order
linear predictor, x(n+1) = w(0)x(n) + w(1)x(n-1), which minimizes the mean square error in
the prediction of x(n+1), and find the MMSE. (AU)
10. Starting from basic principles, derive the expression for the minimum error of an FIR Wiener
filter in terms of the autocorrelation matrix Rx and the cross-correlation vector rdx.
11. Explain the steps in the design of an FIR Wiener filter that produces the minimum mean-square
estimate of the given process. (AU)
12. Explain Kalman filter.(AU)
13. Design a Wiener filter of length M = 2 to estimate s(n), which is an AR(1) process given by
s(n) = 0.8s(n-1) + v(n), where v(n) is a white noise sequence with variance 0.64, from a given
input signal x(n) = s(n) + w(n), where w(n) is a white noise sequence with unit variance.
14. Explain how the Kalman filter differs from the Wiener filter in estimation. (AU)
15. Explain how the Yule-Walker equations can be solved using the Levinson-Durbin algorithm.
16. Describe the different properties of linear prediction error filters.
17. Discuss the following types of linear estimation and prediction techniques: (i) Kalman filter
(ii) lattice filter.
18. (i) How is the Levinson recursion used to derive the lattice filter structure for
FIR digital filters?
(ii) Use the Levinson recursion to find the predictor polynomial corresponding to the
autocorrelation sequence R = [2, 1, 1, -2]^T.
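Question 13 above has a small closed-form check. For the AR(1) signal s(n) = 0.8s(n-1) + v(n) with σv² = 0.64, the signal autocorrelation is rs(k) = (σv²/(1 - 0.8²))·0.8^|k|, and with unit-variance white observation noise the Wiener-Hopf equations for a length-2 filter become (Rs + I)w = rs. A minimal sketch solving the 2x2 system via Cramer's rule (the helper name is illustrative):

```python
def wiener_fir2(a, var_v, var_w):
    """Length-2 FIR Wiener filter for x(n) = s(n) + w(n), where
    s(n) = a*s(n-1) + v(n), with white noises v (var_v) and w (var_w).
    Solves (Rs + var_w * I) w = rs by Cramer's rule."""
    rs0 = var_v / (1.0 - a * a)   # rs(0): AR(1) signal power
    rs1 = rs0 * a                 # rs(1)
    # Autocorrelation matrix of the noisy observation x(n) is Toeplitz:
    r00 = rs0 + var_w
    r01 = rs1
    det = r00 * r00 - r01 * r01
    w0 = (r00 * rs0 - r01 * rs1) / det
    w1 = (r00 * rs1 - r01 * rs0) / det
    return w0, w1

w0, w1 = wiener_fir2(0.8, 0.64, 1.0)
print(w0, w1)  # roughly 0.512 and 0.250
```

Substituting the weights back into ε = rs(0) - w0·rs(0) - w1·rs(1) gives the corresponding MMSE.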
UNIT IV
ADAPTIVE FILTERS
Part A (Ist half)
1. What is the need for adaptivity? (AU)
2. List some applications of Adaptive filters.(AU)
3. What do you mean by adaptive filters?
4. State the principle of steepest descent method.(AU)
5. What are the advantages of FIR adaptive filters?(AU)
6. What is the purpose of normalized LMS algorithm?(AU)
7. What are FIR systems?
8. Why are FIR filters widely used for adaptive filters?
9. Express the LMS adaptive algorithm. State its properties.
10. What are the classifications of adaptive filters based on filter structure?
11. What are the steps in steepest descent algorithm?
12. What are the principles of LMS adaptive algorithm?(AU)
13. Explain the concept of adaptive filtering.
14. What are the properties of adaptive filter?
15. Distinguish the predictive filter and Prediction error filter.
16. What is the need for normalized LMS?
Part A (IInd half)
17. Give the principle of RLS adaptive filter.(AU)
18. How are echoes avoided in long-distance telephone circuits? (AU)
19. What do you mean by adaptive channel equalization?(AU)
20. What is adaptive noise cancellation?
21. Why is LMS preferred over RLS?
22. What is the relationship between the order of the filter and the step size in the LMS adaptive filter?
23. Compare the adaptive noise cancellation and adaptive echo cancellation.
24. Draw the block diagram of an adaptive filter with proper labeling.
25. What is the basic difference between LMS & RLS filter?(AU)
26. What is adaptive echo cancellation?
27. State the Widrow-Hopf equations.
28. Bring out the differences between FIR and IIR filters.
29. What are causal and non-causal filters?
30. Why are FIR filters used in adaptive filter applications?
Part B (Ist half)
1. Explain in detail the LMS adaptive algorithm for FIR filters with applications. (AU)
2. Derive the Widrow-Hopf equations for LMS.
3. Explain how an adaptive linear predictor can be designed using the least mean square
algorithm with a suitable example.(AU)
4. What is the practical limitation of the steepest-descent adaptive filter, and how is it overcome
in the LMS algorithm? Discuss the convergence of the LMS algorithm and of normalized LMS.
5. Show that the normalized LMS algorithm is equivalent to using the update equation
Wn+1 = Wn + µe'(n)x*(n), where e'(n) is the error at time n based on the new filter
coefficients Wn+1: e'(n) = d(n) - Wn+1^T x(n). Discuss the relationship between µ and ε in the
normalized LMS algorithm.
6. Explain the steepest descent method and hence derive LMS algorithm.(AU)
7. Explain normalized LMS algorithm.(AU)
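The LMS recursions asked for above reduce to a one-line weight update, w(n+1) = w(n) + µe(n)x(n). A minimal system-identification sketch in Python; the 2-tap plant and the two-tone excitation are illustrative choices, not taken from any question:

```python
import math

def lms_identify(h_true, mu, n_iter):
    """Identify an FIR plant h_true with an LMS adaptive filter."""
    M = len(h_true)
    w = [0.0] * M       # adaptive weights, initialized to zero
    xbuf = [0.0] * M    # tapped delay line, newest sample first
    for n in range(n_iter):
        x = math.sin(0.5 * n) + math.sin(1.3 * n)  # persistently exciting input
        xbuf = [x] + xbuf[:-1]
        d = sum(h * xi for h, xi in zip(h_true, xbuf))  # desired = plant output
        y = sum(wi * xi for wi, xi in zip(w, xbuf))     # adaptive filter output
        e = d - y                                       # error signal
        w = [wi + mu * e * xi for wi, xi in zip(w, xbuf)]  # LMS update
    return w

w = lms_identify([0.5, -0.3], mu=0.05, n_iter=5000)
print(w)  # converges near [0.5, -0.3]
```

With no observation noise and a persistently exciting input the weights converge to the true taps; normalized LMS would divide µ by the instantaneous input power ε + x(n)ᵀx(n) to make the step size scale-invariant.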
Part B (IInd half)
8. Explain adaptive echo cancellation.(AU)
9. Explain adaptive noise cancellation.(AU)
10. Compare LMS algorithm with RLS adaptive algorithm.(AU)
11. Explain Channel Equalization.
12. With the aid of a neat schematic diagram describe how the problem of adaptive channel
equalization can be solved.(AU)
13. Discuss the exponentially weighted RLS and sliding window RLS.(AU)
14. Explain the RLS adaptive algorithm.
15. With a suitable model and assumptions, establish that Wiener filters are designed based on the
criterion of minimizing the mean square error.
16. Write a detailed account on any two applications of adaptive filters.
UNIT V
MULTIRATE DSP
Part A (Ist half)
1. What is the need for multirate signal processing? (AU)
2. Give the mathematical description of decimation.(AU)
3. What is the effect on power spectrum due to up sampling and down sampling?(AU)
4. What is the need for anti-imaging filter in multirate DSP?(AU)
5. Give an example each for the up-sampling and down-sampling processes.
(AU)
6. What is interpolation?
7. What is decimation?
8. What is the technique used in image compression using multirate systems?
9. Find the transform representation of the expander.
10. Prove that the decimator and expander are time-variant.
11. List the applications of multirate signal processing. (AU)
12. Give an example of a polyphase filter structure. (AU)
13. Why are polyphase filters so named?
14. Draw any one noble identity of an interpolator.
15. What is sampling?
16. What is sampling rate conversion?
17. What is quantization?
18. The signal x(n) = a^n u(n) is applied to a decimator that reduces the rate by a factor of 2. Find
the output spectrum.
19. What is the purpose of the low-pass filter before a down-sampler?
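The down-sampling and up-sampling operations asked about above are simple index manipulations; the anti-aliasing and anti-imaging filters that a complete decimator or interpolator needs are omitted in this minimal sketch:

```python
def decimate(x, M):
    """Down-sample by M: keep every M-th sample, y(n) = x(Mn)."""
    return x[::M]

def expand(x, L):
    """Up-sample by L: insert L-1 zeros between consecutive samples."""
    y = []
    for s in x:
        y.append(s)
        y.extend([0] * (L - 1))
    return y

print(decimate([0, 1, 2, 3, 4, 5], 2))  # [0, 2, 4]
print(expand([1, 2], 3))                # [1, 0, 0, 2, 0, 0]
```

In a full system, a low-pass filter precedes `decimate` (anti-aliasing) and follows `expand` (anti-imaging), which is exactly what Part A questions 4 and 19 ask about.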
Part A (IInd half)
20. What is discrete wavelet transform?(AU)
21. Define wavelet transform.(AU)
22. Define a synthesis filter bank with a neat diagram.(AU)
23. Write down the application of wavelet transform in data compression.
24. Draw the multistage implementation of a multirate system.
25. What are quadrature mirror filters?
26. What is subband coding?(AU)
27. What are the applications of subband coding?
28. What is a Daubechies wavelet?
29. What are the advantages of multistage implementation in multirate signal processing?
30. Define sub band coding of speech.
Part B (Ist half)
1. Explain the process of decimation and interpolation with examples.(AU)
2. Explain the concept of multirate signal processing with a spectral interpretation of the
decimation of a signal from 6 kHz to 2 kHz and of the interpolation of a signal from
2 kHz to 6 kHz. (AU)
3. Explain the realization of an FIR filter based on type I and type II polyphase decomposition.
(AU)
4. Explain the need for sampling rate conversion by a rational factor and how to achieve it.(AU)
5. Derive the frequency-domain relation between the input and output of a decimator and explain
with a specific spectral shape.
6. Enumerate the procedure involved in the multistage implementation of a multirate system. (AU)
7. Develop an expression for the output y(n) as a function of the input x(n) for the multirate
structure shown in the figure. [Figure: cascade of rate-change stages with factors 5, 20 and 4
between x(n) and y(n).]
8. Explain the interpolation process in time domain and frequency domain.
Part B (IInd half)
9. Explain the relation between the Haar wavelet function and its scaling function. (AU)
10. Explain how to implement multistage filter banks by wavelet decomposition.(AU)
11. Explain the filter bank implementation of the wavelet expansion of signals.
12. Write a detailed note on the wavelet transform.
13. Explain subband coding and its applications. (AU)
14. Discuss how speech compression can be achieved using subband coding.(AU)
15. Explain the applications of multirate DSP.
16. What is meant by multiresolution analysis? Explain how the wavelet transform can be used
for this purpose. (AU)