Modern Spectral Analysis
NASSAKA DONANTOUSE J
NSHIMIYIMANA RODGERS
INGABIRE ANNITAH
Introduction to Modern Spectral Analysis
Modern spectral analysis is a technique used to analyze the frequency content of signals and data
using advanced mathematical algorithms and digital signal processing methods.
Modern approaches to spectral analysis are designed to overcome some of the distortions
produced by the classical approach. These are particularly effective if the data segments are
short. More importantly, they provide some control over the kind of spectrum they produce.
Overview of Modern Spectral Analysis Methods
Modern techniques fall into two broad classes: parametric or model-based, and nonparametric.
Parametric methods: Based on modeling the signal (e.g., AR, MA models).
Nonparametric methods: Data-driven, without assuming a signal model (e.g., the Fourier Transform).
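As a quick illustration of the nonparametric class, here is a minimal sketch that computes a Fourier-based periodogram with NumPy and SciPy. The sampling rate, test frequency, and noise level are illustrative assumptions, not values from the text; a parametric counterpart appears in the AR section below.

    import numpy as np
    from scipy.signal import periodogram

    fs = 1000                                   # sampling rate in Hz (assumed)
    n = np.arange(1024)
    x = np.sin(2 * np.pi * 150 * n / fs) + 0.5 * np.random.randn(n.size)

    # Data-driven estimate: no signal model is assumed; the estimate is
    # based directly on the Fourier transform of the data.
    f, Pxx = periodogram(x, fs=fs)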
Parametric Methods
Parametric methods make use of an LTI model to estimate the power spectrum. This model is the same as the filters described in the last chapter and is completely defined by its coefficients. The basic strategy of this approach is to drive the model with white noise and adjust the model coefficients so that the model output matches the waveform being analyzed; the power spectrum is then obtained from the model's transfer function.
Model Type/Model order
Three model types are commonly used in this approach, distinguished by the nature of their transfer functions: autoregressive or AR models (having only a coefficients), moving average or MA models (having only b coefficients), and autoregressive moving average or ARMA models (having both a and b coefficients).
MA (Moving Average)
The MA model is the same as an FIR filter and has the same defining equation, repeated here with a modification in a variable name:

    y[n] = \sum_{k=0}^{q} b[k] \, x[n-k]

where q is the number of b coefficients, called the model order, x[n] is the input, and y[n] is the output.
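Because the MA model is an FIR filter, its power spectrum follows directly from the b coefficients: P(ω) = σ²|B(e^{jω})|², where σ² is the variance of the white-noise input. A minimal sketch, with illustrative coefficients and noise variance (not values from the text):

    import numpy as np
    from scipy.signal import freqz

    b = np.array([1.0, 0.6, 0.2])     # assumed MA coefficients (q = 2)
    sigma2 = 1.0                      # assumed white-noise input variance
    w, h = freqz(b, 1.0, worN=512)    # B(e^{jw}) evaluated on a frequency grid
    psd = sigma2 * np.abs(h)**2       # P(w) = sigma^2 |B(e^{jw})|^2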
AR (Autoregressive)
The AR model has a Z-transform transfer function with only a constant in the numerator and a polynomial in the denominator; hence, this model is sometimes referred to as an all-pole model. The AR model is particularly useful for estimating spectra that have sharp peaks, in other words, the spectra of narrowband signals.
The time-domain equation of an AR model is similar to the IIR filter equation (Equation 4.31), but with only a single numerator coefficient, b[0], which is assumed to be 1:

    y[n] = x[n] - \sum_{k=1}^{p} a[k] \, y[n-k]    (5.2)
Cont’d
where x[n] is the input or noise function and p is the model order. Note that in Equation 5.2, the output is the input after subtracting the convolution of the model coefficients with past versions of the output (i.e., y[n − k]). This linear process is not usually used as a filter; it is just a variant of an IIR filter, one with a constant numerator.
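Because the model is all-pole, its power spectrum is obtained directly from the a coefficients and the variance σ² of the white-noise input:

    P_{AR}(\omega) = \frac{\sigma^2}{\left| 1 + \sum_{k=1}^{p} a[k] \, e^{-j\omega k} \right|^{2}}

The sharp peaks characteristic of AR spectra occur at frequencies where the denominator polynomial comes close to zero, that is, near the poles of the model.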
Cont’d
Graph explanation
The graphs illustrate some of the advantages and disadvantages of using AR analysis as a
spectral analysis tool. A test waveform is constructed consisting of a low-frequency
broadband signal, four sinusoids at 100, 240, 280, and 400 Hz, and white noise. A
classically derived spectrum (i.e., the Fourier transform) is shown without the added
noise in Figure 5.3a and with the noise in Figure 5.3b. The remaining plots show the
spectra obtained with an AR model of differing model orders. Figures 5.3c through 5.3e
show the importance of model order on the resultant spectrum. The use of the Yule–Walker method with a relatively low-order model (p = 17) produces a smooth spectrum, particularly in the low-frequency range, but the spectrum combines the two closely spaced sinusoids (240 and 280 Hz) and does not show the 100-Hz component (Figure 5.3c).
Cont’d
The two higher-order models (p = 25 and 35) identify all of the sinusoidal components
with the highest-order model showing sharper peaks and a better defined peak at 100 Hz
(Figures 5.3d and 5.3e). However, the highest-order model (p = 35) also produces a less
accurate estimate of the low-frequency spectral feature, showing a number of low-
frequency peaks that are not present in the data. Such artifacts are termed spurious peaks;
they occur most often when higher model orders are used. In Figure 5.3f, the spectrum
produced by the covariance method is seen to be nearly identical to the one produced by
the Yule–Walker method with the same model order.
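The following sketch reconstructs a test waveform in the spirit of the one described (sinusoids at 100, 240, 280, and 400 Hz plus white noise; the sampling rate, amplitudes, and record length are assumed) and estimates its spectrum with the Yule–Walker method at a low and a high model order:

    import numpy as np
    from scipy.signal import freqz
    from scipy.linalg import solve_toeplitz

    fs, N = 1000, 1024                      # sampling rate and length (assumed)
    t = np.arange(N) / fs
    x = sum(np.sin(2 * np.pi * f0 * t) for f0 in (100, 240, 280, 400))
    x = x + np.random.randn(N)              # added white noise

    def yule_walker_psd(x, p, nfft=512):
        # Biased autocorrelation estimate up to lag p.
        r = np.correlate(x, x, 'full')[len(x) - 1:len(x) + p] / len(x)
        a = solve_toeplitz(r[:p], -r[1:])   # Yule-Walker: solve R a = -r
        var = r[0] + r[1:] @ a              # variance of the driving noise
        w, h = freqz([1.0], np.r_[1.0, a], worN=nfft, fs=fs)
        return w, var * np.abs(h)**2        # P(w) = var / |A(e^{jw})|^2

    f17, P17 = yule_walker_psd(x, 17)       # low order: smooth, peaks may merge
    f35, P35 = yule_walker_psd(x, 35)       # high order: sharper, risks spurious peaks

Comparing P17 and P35 reproduces the tradeoff described above: the low-order model smooths the spectrum and can merge the 240- and 280-Hz peaks, while the high-order model resolves them at the risk of spurious low-frequency peaks.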
Yule–Walker Equations for the AR Model
To scale the power spectrum correctly, the spectrum is scaled by the variance of the white-noise input, σ². To find the a coefficients, we begin with the basic time-domain equation (Equation 5.2) and rearrange:

    \sum_{n=0}^{p} a[n] \, y[k-n] = x[k]

where x[k] is now white noise and a[0] = 1.
Cont’d
The next set of algebraic operations puts this equation into a format that uses the autocorrelation function. Assuming real-valued signals, multiply both sides of this equation by y[k − m] and take the expectation:

    \sum_{n=0}^{p} a[n] \, E\big[\, y[k-n] \, y[k-m] \,\big] = E\big[\, x[k] \, y[k-m] \,\big]    (5.8)

Note that each term on the left side, E[y[k − n] y[k − m]], is equal to the autocorrelation of y at lag m − n, r_yy[m − n]. The right side, E[x[k] y[k − m]], is equal to zero for m ≥ 1: x[k] is white noise, and y[k − m] depends only on inputs up to time k − m, so the expectation of the product of these two uncorrelated variables is zero. So Equation 5.8 simplifies to:

    \sum_{n=0}^{p} a[n] \, r_{yy}[m-n] = 0, \qquad m = 1, 2, \ldots, p
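Writing these p equations in matrix form gives the Yule–Walker equations (using r_yy[m − n] = r_yy[n − m] for real signals); solving this Toeplitz system for the a coefficients, with the autocorrelations estimated from the data, is the Yule–Walker method used above:

    \begin{bmatrix}
    r_{yy}[0]   & r_{yy}[1]   & \cdots & r_{yy}[p-1] \\
    r_{yy}[1]   & r_{yy}[0]   & \cdots & r_{yy}[p-2] \\
    \vdots      &             & \ddots & \vdots      \\
    r_{yy}[p-1] & r_{yy}[p-2] & \cdots & r_{yy}[0]
    \end{bmatrix}
    \begin{bmatrix} a[1] \\ a[2] \\ \vdots \\ a[p] \end{bmatrix}
    =
    -\begin{bmatrix} r_{yy}[1] \\ r_{yy}[2] \\ \vdots \\ r_{yy}[p] \end{bmatrix}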
ARMA (Autoregressive Moving Average)
If the spectrum is likely to contain both sharp peaks and valleys, then a model
that combines both the AR and MA characteristics can be used. As might be
expected, the transfer function of an ARMA model contains both numerator and
denominator polynomials, so it is sometimes referred to as a pole-zero model.
The ARMA model equation is the same as an IIR filter (Equation 4.32):

    y[n] = \sum_{k=0}^{q} b[k] \, x[n-k] - \sum_{k=1}^{p} a[k] \, y[n-k]
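As with the MA and AR cases, the ARMA power spectrum follows from the transfer function, P(ω) = σ²|B(e^{jω})/A(e^{jω})|². A minimal sketch with illustrative (stable) coefficients, not values from the text:

    import numpy as np
    from scipy.signal import freqz

    b = np.array([1.0, 0.4])          # assumed numerator (MA) coefficients
    a = np.array([1.0, -0.9, 0.5])    # assumed denominator (AR) coefficients
    sigma2 = 1.0                      # assumed white-noise input variance
    w, h = freqz(b, a, worN=512)      # B/A evaluated on a frequency grid
    psd = sigma2 * np.abs(h)**2       # P(w) = sigma^2 |B/A|^2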
Nonparametric Analysis: Eigenanalysis Frequency Estimation
Eigenanalysis spectral methods are promoted as having better resolution and better frequency estimation characteristics, especially at high noise levels. The basic idea is to
separate correlated and uncorrelated signal components using a linear algebra
decomposition method termed singular value decomposition.
This so-called eigen decomposition approach is particularly effective in separating highly
correlated signal components such as sinusoidal, exponential, or other narrowband
processes from uncorrelated white noise, though it is not good at representing the spectra
of broadband signals.
Nonparametric Analysis Cont’d
When highly correlated signal components are involved (narrowband components), these
eigen methods can eliminate much of the uncorrelated noise contribution.
If the noise is not white (colored or broadband noise containing spectral features), then this
noise will have some correlation, and the separation provided by singular value
decomposition will be less effective. These approaches are not very good at separating broadband processes from noise because they rely on signal correlation, yet data from broadband processes are not highly correlated.
Cont’d
The key feature of eigenanalysis approaches is to divide the information contained in
the waveform into two subspaces:
● Signal (correlated) subspace.
● Noise (uncorrelated) subspace.
The eigen decomposition produces a transformed data set where the components,
which are called eigenvectors, are ordered with respect to their energy.
A component’s energy is measured in terms of its total variance, which is called the
component’s eigenvalue.
Cont’d
Thus, each component or eigenvector has an associated variance measure, its eigenvalue, and these components are rank-ordered from highest to lowest eigenvalue. More importantly, the transformed components are orthonormal (orthogonal and scaled to a variance of 1.0).
This means that if the noise subspace components are eliminated from the spectral
analysis, then the influence of that noise is completely removed. The remaining
signal subspace components can then be analyzed with standard Fourier analysis.
The resulting spectra show sharp peaks where sinusoids or other highly correlated
(narrowband) processes exist.
Cont’d
Unlike the parametric methods discussed above, these techniques are not considered true power spectral estimators, since they do not preserve signal power, nor can the autocorrelation sequence be reconstructed by applying the Fourier transform to these estimators. Better termed frequency estimators, they detect narrowband signals, but only in relative units.
The eigen decomposition needs multiple observations since it operates on a
matrix. If multiple observations of the signal are not available, the autocorrelation
matrix can be used for the decomposition.
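A minimal sketch of this step for a single record, with an illustrative test signal and an assumed matrix dimension M: build a Toeplitz autocorrelation matrix from the data and eigendecompose it, ranking components from highest to lowest eigenvalue as described above.

    import numpy as np
    from scipy.linalg import toeplitz, eigh

    fs, N, M = 1000, 1024, 20          # rate, length, matrix dimension (assumed)
    t = np.arange(N) / fs
    x = np.sin(2 * np.pi * 200 * t) + np.random.randn(N)

    # Autocorrelation matrix built from the single observation.
    r = np.correlate(x, x, 'full')[N - 1:N - 1 + M] / N
    R = toeplitz(r)

    # eigh returns eigenvalues in ascending order, so flip both the
    # eigenvalues and the eigenvectors to rank them from highest (signal)
    # to lowest (noise) energy.
    lam, V = eigh(R)
    lam, V = lam[::-1], V[:, ::-1]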
Eigenvalue Decomposition Methods
The Pisarenko harmonic decomposition method.
The multiple signal classification (MUSIC) algorithm, a related method that improves on the original algorithm, but at the cost of greater computation time.
Both approaches are somewhat roundabout and actually estimate the spectrum of
the noise subspace, not the signal subspace.
Eigenvalue Decomposition Methods Cont’d
We assume that after decomposition, the signal and noise subspaces are orthogonal, so the spectrum of the noise subspace will be small at frequencies where a signal is present. The estimate of the noise subspace frequency spectrum is placed in the denominator, so large peaks occur at those frequencies where narrowband signals are present.
By placing the frequency estimation in the denominator, we are able to generate spectral
functions having very sharp peaks, which allow us to clearly define any narrowband
signals present.
The Equation of the MUSIC Narrowband Estimator
The MUSIC estimator is

    P_{MUSIC}(\omega) = \frac{1}{\sum_{k=p+1}^{M} \big| \mathbf{v}_k^{H} \, \mathbf{e}(\omega) \big|^{2} / \lambda_k}

where M is the dimension of the eigenvectors, λ_k is the eigenvalue of the kth eigenvector, and v_k is the kth eigenvector, usually calculated from the correlation matrix of the input signal. The eigenvalues, λ_k, are ordered from the highest to the lowest power, so presumably the signal subspace is the lower dimensions (i.e., lower values of k) and the noise subspace the higher. The integer p is considered the dimension of the signal subspace (from 1 to p), so the summation taken from p + 1 to M is over the noise subspace.
Equation Cont’d
The vector e(ω) consists of complex exponentials:

    \mathbf{e}(\omega) = [\, 1, \; e^{j\omega}, \; e^{j2\omega}, \; e^{j3\omega}, \; \ldots, \; e^{j(M-1)\omega} \,]^{T}
so the term in the denominator is just the Fourier transform of the eigenvectors, and the equation reduces to

    P_{MUSIC}(\omega) = \frac{1}{\sum_{k=p+1}^{M} \big| \mathrm{FT}\{\mathbf{v}_k\}(\omega) \big|^{2} / \lambda_k}

Note: The denominator term is the summation of the scaled power spectra of the noise subspace components.
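A minimal sketch of this computation under the stated assumptions (the autocorrelation matrix dimension M, signal-subspace dimension p, and FFT length are illustrative choices; the 1/λ_k scaling follows the equation above):

    import numpy as np
    from scipy.linalg import toeplitz, eigh

    def music_pseudospectrum(x, M, p, nfft=1024):
        # Autocorrelation matrix of the input signal (single record).
        N = len(x)
        r = np.correlate(x, x, 'full')[N - 1:N - 1 + M] / N
        lam, V = eigh(toeplitz(r))      # eigenvalues in ascending order
        lam, V = lam[::-1], V[:, ::-1]  # reorder: highest power first

        # Sum the scaled power spectra of the noise-subspace eigenvectors
        # (dimensions p + 1 through M); the FFT of each eigenvector gives
        # its spectrum.
        denom = np.zeros(nfft)
        for k in range(p, M):           # zero-based indices of the noise subspace
            Vf = np.fft.fft(V[:, k], nfft)
            denom += np.abs(Vf)**2 / lam[k]
        return 1.0 / denom              # sharp peaks at narrowband signals

    # Example: two sinusoids in noise; p = 4 since each real sinusoid
    # contributes two complex exponentials.
    t = np.arange(1024) / 1000.0
    x = (np.sin(2 * np.pi * 240 * t) + np.sin(2 * np.pi * 280 * t)
         + 0.5 * np.random.randn(t.size))
    P = music_pseudospectrum(x, M=20, p=4)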
Cont’d
Since this frequency estimator operates by finding the signals, or really the absence of signals, in the noise subspace, the eigenvectors used in the sum are all those in the decomposed correlation matrix with index greater than p, presumably those spanning the noise subspace.
The Pisarenko frequency estimator was a precursor to the MUSIC algorithm and is an
abbreviated version of MUSIC in which only the first noise subspace dimension is used;
in mathematical terms, M = p + 1 in the above equation, so no summation is required.
The Pisarenko method is faster, since only a single Fourier transform is calculated, but considerably less stable for the same reason: only a single dimension is used to represent the noise subspace.
Determining Signal Subspace and Noise Subspace Dimensions
If the number of narrowband processes is known, then the dimensions of the signal
subspace can be calculated using this knowledge: since each real sinusoid is the sum of two
complex exponentials, the signal subspace dimension should be twice the number of
sinusoids, or narrowband processes, present. In some applications, the signal subspace can
be determined by the size of the eigenvalues.
[Figure: the eigenvalues (proportional to component energies) plotted in rank order.]
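One simple way to implement this idea, sketched below under the assumption that a clear gap separates the signal eigenvalues from the noise floor (the threshold fraction is an illustrative choice):

    import numpy as np

    def signal_subspace_dim(lam, frac=0.1):
        # lam: eigenvalues ranked from highest to lowest.
        # Count the eigenvalues that stand above a fraction of the largest;
        # that count estimates the signal subspace dimension p.
        return int(np.sum(lam > frac * lam[0]))

    # With two real sinusoids (four complex exponentials), we expect p == 4.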
THANK YOU