

CHAPTER 13. Fourier and Spectral Applications

13.0 Introduction
Fourier methods have revolutionized fields of science and engineering, from
astronomy to medical imaging, from seismology to spectroscopy. In this chapter,
we present some of the basic applications of Fourier and spectral methods that have
made these revolutions possible.
Say the word “Fourier” to a numericist, and the response, as if by Pavlovian
conditioning, will likely be “FFT.” Indeed, the wide application of Fourier methods
must be credited principally to the existence of the fast Fourier transform. Better
mousetraps, move over: If you speed up any nontrivial algorithm by a factor of a
million or so, the world will beat a path toward finding useful applications for it.
The most direct applications of the FFT are to the convolution or deconvolution of
data (§13.1), correlation and autocorrelation (§13.2), optimal filtering (§13.3), power
spectrum estimation (§13.4), and the computation of Fourier integrals (§13.9).
As important as they are, however, FFT methods are not the be-all and end-all of
spectral analysis. Section 13.5 is a brief introduction to the field of time-domain
digital filters. In the spectral domain, one limitation of the FFT is that it always
represents a function’s Fourier transform as a polynomial in z = exp(2πifΔ) (cf. equation
12.1.7). Sometimes, processes have spectra whose shapes are not well represented
by this form. An alternative form, which allows the spectrum to have poles in z, is
used in the techniques of linear prediction (§13.6) and maximum entropy spectral
estimation (§13.7).
Another significant limitation of all FFT methods is that they require the input
data to be sampled at evenly spaced intervals. For irregularly or incompletely
sampled data, other (albeit slower) methods are available, as discussed in §13.8.
So-called wavelet methods inhabit a representation of function space that is
neither in the temporal nor in the spectral domain, but rather somewhere in-between.
Section 13.10 is an introduction to this subject. Finally, §13.11 is an excursion into
the numerical use of the Fourier sampling theorem.


[Figure 13.1.1 shows three panels versus t: the signal s(t), the response r(t), and the convolution r ∗ s(t).]
Figure 13.1.1. Example of the convolution of two functions. A signal s(t) is convolved with a response
function r(t). Since the response function is broader than some features in the original signal, these are
“washed out” in the convolution. In the absence of any additional noise, the process can be reversed by
deconvolution.

13.1 Convolution and Deconvolution Using the FFT
We have defined the convolution of two functions for the continuous case in
equation (12.0.9), and have given the convolution theorem as equation (12.0.10).
The theorem says that the Fourier transform of the convolution of two functions is
equal to the product of their individual Fourier transforms. Now, we want to deal
with the discrete case. We will mention first the context in which convolution is a
useful procedure, and then discuss how to compute it efficiently using the FFT.
The convolution of two functions r(t) and s(t), denoted r ∗ s, is mathematically
equal to their convolution in the opposite order, s ∗ r. Nevertheless, in most
applications the two functions have quite different meanings and characters. One of
the functions, say s, is typically a signal or data stream, which goes on indefinitely in
time (or in whatever the appropriate independent variable may be). The other function
r is a “response function,” typically a peaked function that falls to zero in both
directions from its maximum. The effect of convolution is to smear the signal s(t)
in time according to the recipe provided by the response function r(t), as shown in
Figure 13.1.1. In particular, a spike or delta-function of unit area in s which occurs
at some time t0 is supposed to be smeared into the shape of the response function
itself, but translated from time 0 to time t0 as r(t − t0).
In the discrete case, the signal s(t) is represented by its sampled values at equal
time intervals, s_j. The response function is also a discrete set of numbers r_k, with the
following interpretation: r_0 tells what multiple of the input signal in one channel (one
particular value of j) is copied into the identical output channel (same value of j);
r_1 tells what multiple of input signal in channel j is additionally copied into output
channel j + 1; r_{−1} tells the multiple that is copied into channel j − 1; and so on for
both positive and negative values of k in r_k. Figure 13.1.2 illustrates the situation.

[Figure 13.1.2 shows three panels over the index range 0 to N−1: the samples s_j, the response r_k, and the convolution (r ∗ s)_j.]
Figure 13.1.2. Convolution of discretely sampled functions. Note how the response function for negative
times is wrapped around and stored at the extreme right end of the array r_k.

Example: A response function with r_0 = 1 and all other r_k’s equal to zero
is just the identity filter. Convolution of a signal with this response function gives
identically the signal. Another example is the response function with r_14 = 1.5 and
all other r_k’s equal to zero. This produces convolved output that is the input signal
multiplied by 1.5 and delayed by 14 sample intervals.
Evidently, we have just described in words the following definition of discrete
convolution with a response function of finite duration M:

$$(r * s)_j \equiv \sum_{k=-M/2+1}^{M/2} s_{j-k}\, r_k \qquad\qquad (13.1.1)$$

If a discrete response function is nonzero only in some range −M/2 < k ≤ M/2,
where M is a sufficiently large even integer, then the response function is called a
finite impulse response (FIR), and its duration is M. (Notice that we are defining M
as the number of nonzero values of r_k; these values span a time interval of M − 1
sampling times.) In most practical circumstances the case of finite M is the case of
interest, either because the response really has a finite duration, or because we choose
to truncate it at some point and approximate it by a finite-duration response function.
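To make the definition concrete, here is a minimal sketch that evaluates equation (13.1.1) directly for a periodic signal, taking the index j − k modulo N. The function name directConvolve and its argument layout are illustrative assumptions, not part of the Numerical Recipes library. Its O(NM) cost is what the FFT method of §13.1.2 below replaces with O(N log N).

```cpp
#include <vector>

// Illustrative sketch (directConvolve is not an NR routine): evaluate
// equation (13.1.1) directly for a signal s of period N and a response
// given on k = -M/2+1, ..., M/2 (M even), stored in r[k + M/2 - 1].
// The index j - k is taken modulo N, i.e., the signal is treated as periodic.
std::vector<double> directConvolve(const std::vector<double>& s,
                                   const std::vector<double>& r, int M)
{
    int N = s.size();
    std::vector<double> out(N, 0.0);
    for (int j = 0; j < N; j++)
        for (int k = -M/2 + 1; k <= M/2; k++) {
            int idx = ((j - k) % N + N) % N;   // periodic index j - k
            out[j] += s[idx] * r[k + M/2 - 1]; // r[k + M/2 - 1] holds r_k
        }
    return out;
}
```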
The discrete convolution theorem is this: If a signal s_j is periodic with period
N, so that it is completely determined by the N values s_0, ..., s_{N−1}, then its discrete
convolution with a response function of finite duration N is a member of the discrete
Fourier transform pair,


$$\sum_{k=-N/2+1}^{N/2} s_{j-k}\, r_k \;\Longleftrightarrow\; S_n R_n \qquad\qquad (13.1.2)$$

Here S_n (n = 0, ..., N−1) is the discrete Fourier transform of the values s_j (j =
0, ..., N−1), while R_n (n = 0, ..., N−1) is the discrete Fourier transform of
the values r_k (k = 0, ..., N−1). These values of r_k are the same as for the range
k = −N/2 + 1, ..., N/2, but in wraparound order, exactly as was described at the
end of §12.2.
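As an illustration of the wraparound ordering just described, the following sketch places response values given for k = −N/2+1, ..., N/2 into an array of length N, with nonnegative k stored at index k and negative k stored at index N + k. The helper name toWraparound and its input layout are assumptions for this example, not NR code.

```cpp
#include <vector>

// Illustrative helper (not an NR routine): place response values r_k, given
// for k = -N/2+1, ..., N/2 in rIn[k + N/2 - 1], into an array of length N in
// the wraparound order required by equation (13.1.2): index k for k >= 0,
// index N + k for k < 0.
std::vector<double> toWraparound(const std::vector<double>& rIn, int N)
{
    std::vector<double> r(N, 0.0);
    for (int k = -N/2 + 1; k <= N/2; k++)
        r[(k >= 0) ? k : N + k] = rIn[k + N/2 - 1];
    return r;
}
```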

13.1.1 Treatment of End Effects by Zero Padding


The discrete convolution theorem presumes a set of two circumstances that are
not universal. First, it assumes that the input signal is periodic, whereas real data
often either go forever without repetition or else consist of one nonperiodic stretch
of finite length. Second, the convolution theorem takes the duration of the response
to be the same as the period of the data; they are both N. We need to work around
these two constraints.
The second is very straightforward. Almost always, one is interested in a
response function whose duration M is much shorter than the length of the data set
N. In this case, you simply extend the response function to length N by padding
it with zeros, i.e., define r_k = 0 for M/2 ≤ k ≤ N/2 and also for −N/2 +
1 ≤ k ≤ −M/2 + 1. Dealing with the first constraint is more challenging. Since
the convolution theorem rashly assumes that the data are periodic, it will falsely
“pollute” the first output channel (r ∗ s)_0 with some wrapped-around data from the
far end of the data stream, s_{N−1}, s_{N−2}, etc. (See Figure 13.1.3.) So, we need to set
up a buffer zone of zero-padded values at the end of the s_j vector, in order to make
this pollution zero. How many zero values do we need in this buffer? Exactly as
many as the most negative index for which the response function is nonzero. For
example, if r_{−3} is nonzero while r_{−4}, r_{−5}, ... are all zero, then we need three zero
pads at the end of the data: s_{N−3} = s_{N−2} = s_{N−1} = 0. These zeros will protect the
first output channel (r ∗ s)_0 from wraparound pollution. It should be obvious that the
second output channel (r ∗ s)_1 and subsequent ones will also be protected by these
same zeros. Let K denote the number of padding zeros, so that the last actual input
data point is s_{N−K−1}.
What now about pollution of the very last output channel? Since the data now
end with s_{N−K−1}, the last output channel of interest is (r ∗ s)_{N−K−1}. This channel
can be polluted by wraparound from input channel s_0 unless the number K is also
large enough to take care of the most positive index k for which the response function
r_k is nonzero. For example, if r_0 through r_6 are nonzero, while r_7, r_8, ... are all zero,
then we need at least K = 6 padding zeros at the end of the data: s_{N−6} = ... =
s_{N−1} = 0.
To summarize: we need to pad the data with a number of zeros on one end
equal to the maximum positive duration or maximum negative duration of the response
function, whichever is larger. (For a symmetric response function of duration
M, you will need only M/2 zero pads.) Combining this operation with the padding
of the response r_k described above, we effectively insulate the data from artifacts of
undesired periodicity. Figure 13.1.4 illustrates matters; a short padding sketch follows
the figure captions below.

[Figure 13.1.3 shows a response function with positive and negative durations m+ and m−, a sample of the original function, and the resulting convolution, whose regions near both ends are spoiled while the middle is unspoiled.]
Figure 13.1.3. The wraparound problem in convolving finite segments of a function. Not only must
the response function wrap be viewed as cyclic, but so must the sampled original function. Therefore,
a portion at each end of the original function is erroneously wrapped around by convolution with the
response function.

[Figure 13.1.4 shows the response function with durations m+ and m−, the original function extended by zero padding, and the resulting convolution: the padded region is not spoiled because it is zero, and the only spoiled output lies on the zero padding, where it is irrelevant.]
Figure 13.1.4. Zero-padding as solution to the wraparound problem. The original function is extended
by zeros, serving a dual purpose: When the zeros wrap around, they do not disturb the true convolution;
and while the original function wraps around onto the zero region, that region can be discarded.
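Here is the padding sketch promised above: a minimal illustration of appending K = max(m+, m−) zeros to the data before an FFT convolution. The helper name padForConvolution and its argument conventions are assumptions for this example, not NR code.

```cpp
#include <algorithm>
#include <vector>

// Illustrative sketch (padForConvolution is not an NR routine): append
// K = max(m_plus, m_minus) zeros to the data, where the response r_k is
// nonzero only for -m_minus <= k <= m_plus, so that wraparound from the
// periodic convolution lands entirely on the zero pad.
std::vector<double> padForConvolution(const std::vector<double>& data,
                                      int mPlus, int mMinus)
{
    int K = std::max(mPlus, mMinus);
    std::vector<double> padded(data);
    padded.insert(padded.end(), K, 0.0);
    // For use with convlv (next subsection), further zeros would then be
    // appended to round the total length up to a power of 2.
    return padded;
}
```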


13.1.2 Use of FFT for Convolution


The data, complete with zero-padding, are now a set of real numbers s_j, j =
0, ..., N−1, and the response function is zero-padded out to duration N and
arranged in wraparound order. (Generally this means that a large contiguous section
of the r_k’s, in the middle of that array, is zero, with nonzero values clustered at the
two extreme ends of the array.) You now compute the discrete convolution as follows:
Use the FFT algorithm to compute the discrete Fourier transform of s and of r.
Multiply the two transforms together component-by-component, remembering that
the transforms consist of complex numbers. Then use the FFT algorithm to take the
inverse discrete Fourier transform of the products. The answer is the convolution
r ∗ s.
What about deconvolution? Deconvolution is the process of undoing the smear-
ing in a data set that has occurred under the influence of a known response function,
for example, because of the known effect of a less-than-perfect measuring apparatus.
The defining equation of deconvolution is the same as that for convolution, namely
(13.1.1), except now the left-hand side is taken to be known and (13.1.1) is to be
considered as a set of N linear equations for the unknown quantities sj . Solving
these simultaneous linear equations in the time domain of (13.1.1) is unrealistic in
most cases, but the FFT renders the problem almost trivial. Instead of multiplying
the transform of the signal and response to get the transform of the convolution, we
just divide the transform of the (known) convolution by the transform of the response
to get the transform of the deconvolved signal.
This procedure can go wrong mathematically if the transform of the response
function is exactly zero for some value R_n, so that we can’t divide by it. This indicates
that the original convolution has truly lost all information at that one frequency,
so that a reconstruction of that frequency component is not possible. You should be
aware, however, that apart from mathematical problems, the process of deconvolution
has other practical shortcomings. The process is generally quite sensitive to
noise in the input data, and to the accuracy to which the response function r_k is
known. Perfectly reasonable attempts at deconvolution can sometimes produce
nonsense for these reasons. In such cases you may want to make use of the additional
process of optimal filtering, which is discussed in §13.3.
Here is our routine for convolution and deconvolution, using the FFT as implemented
in realft (§12.3). The data are assumed to be stored in a VecDoub array
data[0..n-1], with n an integer power of 2. The response function is assumed to
be stored in wraparound order in a VecDoub array respns[0..m-1]. The value of m
can be any odd integer less than or equal to n, since the first thing the program does
is to recopy the response function into the appropriate wraparound order in an array
of length n. The answer is provided in ans, which is also used as working space.

// convlv.h
void convlv(VecDoub_I &data, VecDoub_I &respns, const Int isign,
    VecDoub_O &ans) {
    // Convolves or deconvolves a real data set data[0..n-1] (including any
    // user-supplied zero padding) with a response function respns[0..m-1],
    // where m is an odd integer <= n. The response function must be stored in
    // wraparound order: The first half of the array respns contains the impulse
    // response function at positive times, while the second half of the array
    // contains the impulse response function at negative times, counting down
    // from the highest element respns[m-1]. On input isign is +1 for convolution,
    // -1 for deconvolution. The answer is returned in ans[0..n-1]. n must be an
    // integer power of 2.
    Int i,no2,n=data.size(),m=respns.size();
    Doub mag2,tmp;
    VecDoub temp(n);
    temp[0]=respns[0];
    for (i=1;i<(m+1)/2;i++) {               // Put respns in array of length n.
        temp[i]=respns[i];
        temp[n-i]=respns[m-i];
    }
    for (i=(m+1)/2;i<n-(m-1)/2;i++)         // Pad with zeros.
        temp[i]=0.0;
    for (i=0;i<n;i++)
        ans[i]=data[i];
    realft(ans,1);                          // FFT both arrays.
    realft(temp,1);
    no2=n>>1;
    if (isign == 1) {
        for (i=2;i<n;i+=2) {                // Multiply FFTs to convolve.
            tmp=ans[i];
            ans[i]=(ans[i]*temp[i]-ans[i+1]*temp[i+1])/no2;
            ans[i+1]=(ans[i+1]*temp[i]+tmp*temp[i+1])/no2;
        }
        ans[0]=ans[0]*temp[0]/no2;
        ans[1]=ans[1]*temp[1]/no2;
    } else if (isign == -1) {
        for (i=2;i<n;i+=2) {                // Divide FFTs to deconvolve.
            if ((mag2=SQR(temp[i])+SQR(temp[i+1])) == 0.0)
                throw("Deconvolving at response zero in convlv");
            tmp=ans[i];
            ans[i]=(ans[i]*temp[i]+ans[i+1]*temp[i+1])/mag2/no2;
            ans[i+1]=(ans[i+1]*temp[i]-tmp*temp[i+1])/mag2/no2;
        }
        if (temp[0] == 0.0 || temp[1] == 0.0)
            throw("Deconvolving at response zero in convlv");
        ans[0]=ans[0]/temp[0]/no2;
        ans[1]=ans[1]/temp[1]/no2;
    } else throw("No meaning for isign in convlv");
    realft(ans,-1);                         // Inverse transform back to time domain.
}
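Below is a minimal usage sketch for convlv. It assumes the NR3 types and realft are available through nr3.h and the usual NR3 source files (the exact header names may differ in your copy of the library); the signal and the triangular response used here are arbitrary placeholders.

```cpp
#include <cmath>
#include "nr3.h"       // NR3 basic types: Doub, Int, VecDoub, ...
#include "fourier.h"   // realft  (header names may differ in your NR3 sources)
#include "convlv.h"    // the convlv routine listed above

int main()
{
    const Int n = 1024;   // power of 2, with room left over for zero padding
    const Int m = 9;      // odd response length
    VecDoub data(n), respns(m), ans(n);

    // Arbitrary example signal in the first 1000 channels; the last 24
    // channels stay zero as the wraparound buffer (K = 4 would suffice here).
    for (Int j = 0; j < n; j++) data[j] = (j < 1000) ? sin(0.02*j) : 0.0;

    // Symmetric triangular response r_0 = 5, r_{+-k} = 5 - k, in wraparound
    // order: respns[0..4] = r_0..r_4, respns[8] = r_{-1}, ..., respns[5] = r_{-4}.
    for (Int k = 0; k <= 4; k++) respns[k] = 5.0 - k;
    for (Int k = 1; k <= 4; k++) respns[m-k] = 5.0 - k;

    convlv(data, respns, 1, ans);      // isign = +1: convolution into ans[0..n-1]
    // convlv(ans, respns, -1, data);  // isign = -1 would attempt the deconvolution
    return 0;
}
```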

13.1.3 Convolving or Deconvolving Very Large Data Sets


If your data set is so long that you do not want to fit it into memory all at
once, then you must break it up into sections and convolve each section separately.
Now, however, the treatment of end effects is a bit different. You have to worry
not only about spurious wraparound effects, but also about the fact that the ends of
each section of data should have been influenced by data at the nearby ends of the
immediately preceding and following sections of data, but were not so influenced
since only one section of data is in the machine at a time.
There are two, related, standard solutions to this problem. Both are fairly obvi-
ous, so with a few words of description here, you ought to be able to implement them
for yourself. The first solution is called the overlap-save method. In this technique
you pad only the very beginning of the data with enough zeros to avoid wraparound
pollution. After this initial padding, you forget about zero-padding altogether. Bring
in a section of data and convolve or deconvolve it. Then throw out the points at each
end that are polluted by wraparound end effects. Output only the remaining good
points in the middle. Now bring in the next section of data, but not all new data. The
first points in each new section overlap the last points from the preceding section of
data. The sections must be overlapped sufficiently so that the polluted output points

at the end of one section are recomputed as the first of the unpolluted output points
from the subsequent section. With a bit of thought you can easily determine how
many points to overlap and save.

[Figure 13.1.5 shows the data stream broken into pieces a, b, c; each piece is zero-padded at both ends and convolved into A, B, C; the outputs are then overlapped and added as A, A+B, B, B+C, C.]
Figure 13.1.5. The overlap-add method for convolving a response with a very long signal. The signal
data are broken up into smaller pieces. Each is zero-padded at both ends and convolved (denoted by bold
arrows in the figure). Finally the pieces are added back together, including the overlapping regions formed
by the zero-pads.
The second solution, called the overlap-add method, is illustrated in Figure
13.1.5. Here you don’t overlap the input data. Each section of data is disjoint from
the others and is used exactly once. However, you carefully zero-pad it at both ends
so that there is no wraparound ambiguity in the output convolution or deconvolution.
Now you overlap and add these sections of output. Thus, an output point near the
end of one section will have the response due to the input points at the beginning of
the next section of data properly added in to it, and likewise for an output point near
the beginning of a section, mutatis mutandis.
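As a rough illustration of the overlap-add bookkeeping, here is a sketch built on the convlv routine above. The helper name, the section layout (each piece placed at offset L inside an FFT buffer of power-of-2 length nfft ≥ nsec + 2L), and the assumption that the response is nonzero only for |k| ≤ L and supplied in wraparound order with odd length 2L+1 are illustrative choices for this example, not part of the Numerical Recipes library.

```cpp
#include <vector>
#include "nr3.h"
#include "fourier.h"   // realft (header names may differ in your NR3 sources)
#include "convlv.h"    // convlv as listed above

// Illustrative overlap-add sketch (not an NR routine): convolve a long signal
// with a response nonzero only for |k| <= L, processing nsec samples per pass.
std::vector<double> overlapAdd(const std::vector<double>& signal,
                               const VecDoub& respns,   // wraparound order, length 2*L+1
                               int L, int nsec, int nfft)
{
    // Each section is placed at offset L inside an nfft buffer (power of 2,
    // nfft >= nsec + 2*L), so zeros pad both ends and the circular convolution
    // of the section equals the linear one.
    std::vector<double> out(signal.size() + L, 0.0);
    VecDoub data(nfft), ans(nfft);
    for (size_t start = 0; start < signal.size(); start += nsec) {
        for (int i = 0; i < nfft; i++) data[i] = 0.0;           // zero-pad the buffer
        for (int i = 0; i < nsec && start + i < signal.size(); i++)
            data[L + i] = signal[start + i];                    // copy one section
        convlv(data, respns, 1, ans);                           // convolve the piece
        for (int p = 0; p < nsec + 2*L; p++) {                  // overlap and add
            long g = (long)start + p - L;                       // global output index
            if (g >= 0 && g < (long)out.size()) out[g] += ans[p];
        }
    }
    return out;
}
```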
Even when computer memory is available, there is some slight gain in comput-
ing speed in segmenting a long data set, since the FFTs’ N log2 N is slightly slower
than linear in N . However, the log term is so slowly varying that you will often be
much happier to avoid the bookkeeping complexities of the overlap-add or overlap-
save methods: If it is practical to do so, just cram the whole data set into memory
and FFT away. Then you will have more time for the finer things in life, some of
which are described in succeeding sections of this chapter.

CITED REFERENCES AND FURTHER READING:


Nussbaumer, H.J. 1982, Fast Fourier Transform and Convolution Algorithms (New York: Springer).
Elliott, D.F., and Rao, K.R. 1982, Fast Transforms: Algorithms, Analyses, Applications (New York: Academic Press).
Brigham, E.O. 1974, The Fast Fourier Transform (Englewood Cliffs, NJ: Prentice-Hall), Chapter 13.
