
Seismic Interpretation

3 SEISMIC DATA PROCESSING


The point of processing the data is to obtain a clear acoustic image of the subsurface in the area of interest, so that the interpreter can do the job accurately – namely, defining targets to drill. It is especially important to position the seismic events correctly in the subsurface, and this involves migrating the data to their correct locations. If seismic events are not migrated to their correct subsurface positions, any drilling prognosis will be hopelessly wrong. Before describing the individual processes applied to the data in some detail, it is worth describing the principles of migration.

Reasons to Migrate

Migration moves dipping reflectors to their correct subsurface location. On a stack section the seismic events are plotted vertically below the surface mid-point positions, whereas the energy arrives from a normal incidence location on the actual reflector. Consequently any dipping reflector is mis-positioned on the stack section and needs to be moved updip to its true location, Figure P-1.

Figure P-1. The migration algorithm moves the seismic depth point
S to its true location at D.


An example of the differences between stacked and migrated data is displayed in Figures P-2 and P-3. The first figure shows the stacked data over an anticline, with
the characteristic diffraction bowties towards the bottom of the section. After
migration, the anticline is much narrower, the data on the flanks has moved updip
and the bowties have been correctly imaged into synclines. If the migration has
done its job properly, the events in the migrated section are in the correct
subsurface position. There are many reasons why the migration may be in error
and some of these will be discussed in section 3.1.16.

Figure P-2


Figure P-3. The Migrated Section


3.1 The Conventional Processing Sequence

Land and Marine Data Land Only


3 Vibroseis Correlation
4 Field Statics
2 Signature Deconvolution
2 Amplitude Recovery
2 Phase Correction
1 Dip Filter
3 Predictive Deconvolution
1 Specialised Demultiple
Velocity Analysis
4 Residual Statics
4 DMO
Velocity Analysis
4 Pre-Stack Time Migration
Velocity Analysis
3 NMO Correction
3 Stack
Demigrate
1 Noise Attenuation
3 Spectral Shaping
4 Migration
Final Filter and Scale

Key: 1 – Noise Reduction; 2 – Amplitude and Phase Recovery; 3 – Resolution Enhancement; 4 – Positioning and Imaging


3.1.1 Vibroseis Correlation

The swept frequency signal entering the ground from the base plate is up to 20s
long and contains frequencies from as low as 5 to 10Hz up to perhaps as high as
200Hz.
Each seismic reflection will be a scaled version of this signal. The amplitude response of the signal is broad-band but the phase spectrum is complicated, Figure P-4, so it is necessary to simplify it and convert it into a signature similar to that derived from an impulsive source such as an airgun. This is done by correlating the signal measured at the baseplate of the Vibroseis truck with that recorded by the geophones. This is in effect correlating the Vibroseis signal with itself, and is called autocorrelation. Correlation in the frequency domain multiplies the amplitude spectra and subtracts the phase spectra, so autocorrelation squares the amplitude spectrum and reduces the phase to zero, i.e. it results in a zero-phase wavelet, also called a Klauder wavelet, Figure P-5.
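The correlation described above can be sketched in a few lines. This is only an illustration: the sweep parameters (10 s length, 10–100 Hz linear sweep) are invented, not taken from the text.

```python
import numpy as np

# Synthetic linear Vibroseis sweep (all parameters invented for illustration).
dt = 0.002                                   # 2 ms sampling
T = 10.0                                     # sweep length in seconds
t = np.arange(0.0, T, dt)
f0, f1 = 10.0, 100.0                         # start and end frequencies, Hz
sweep = np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * T)))

# Correlating the sweep with itself (autocorrelation) squares the amplitude
# spectrum and cancels the phase, giving the zero-phase Klauder wavelet.
klauder = np.correlate(sweep, sweep, mode="full")
klauder /= klauder.max()
zero_lag = len(klauder) // 2                 # index of the zero-lag sample
```

Because the wavelet is zero phase, its peak sits exactly at zero lag and it is symmetric about that sample.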

Figure P-4


The zero-phase Klauder Wavelet

Figure P-5

The Vibroseis correlation is carried out in the field, so the records delivered to the processing centre will contain the Klauder wavelet convolved with the broadband earth reflectivity. Some of the subsequent processes assume the data to be minimum phase, so it may be necessary to change the zero-phase wavelet into its minimum-phase equivalent. This is done by signature deconvolution.

3.1.2 Field Statics

Marine data is referenced to mean sea level and so the static correction involves
applying a time shift to correct for the depth of the sources and streamers. Also,
any delay in the recording instruments is allowed for at this stage.

On land the situation is much more complicated because of the topography that
may exist over the survey area and the need to remove the effect of rapid velocity
changes in the near surface, or weathering layer. A refraction survey can be shot
to determine the required velocities to enable the seismic data to be corrected to a
defined datum.

If dynamite was used as the source then shotholes would have been drilled to the
bedrock, or to below the weathering layer if bedrock was not reachable. A
geophone placed at the surface near each shothole records the uphole time, and this provides a good estimate of the weathering-layer velocity, Vw, at each shot position, Figure P-6. To obtain an estimate of the bedrock velocity, Vb, a deeper shothole may have been drilled at the beginning of the survey and a number of shots fired over this zone. The travel times recorded from the shots over this deep interval provide an estimate of the bedrock velocity.

These uphole velocities provide a valuable calibration for the refraction survey
and enable the long wavelength velocity variation to be determined. A detailed
knowledge of the topography of the survey combined with the velocity information
from the uphole data and refraction survey allows a correction to be made to a
specified datum.

Figure P-6

This is carried out in two phases. The first correction is to a smoothly varying
surface, which averages out the effects of the topography. At the end of the
processing, static corrections are applied to correct the data to a horizontal datum,
often mean sea level.
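The datum correction described above can be sketched for a simple two-layer model. All numbers here (elevations, thicknesses, velocities) are invented for illustration:

```python
# A minimal sketch of a two-layer field (datum) static: surface elevation E_s,
# weathering thickness d_w with velocity V_w, bedrock velocity V_b, and a flat
# datum at elevation E_d below the base of the weathering layer.

def field_static(E_s, d_w, V_w, V_b, E_d):
    """One-way time (s) from the surface down to the datum. Subtracting this
    (and the corresponding receiver-side term) references the trace to the
    datum."""
    t_weathering = d_w / V_w                    # slow near-surface layer
    t_bedrock = (E_s - d_w - E_d) / V_b         # remaining column at V_b
    return t_weathering + t_bedrock

# Example: 10 m of 600 m/s weathering under a shot at 100 m elevation,
# 2400 m/s bedrock, datum at 0 m.
t_shot = field_static(E_s=100.0, d_w=10.0, V_w=600.0, V_b=2400.0, E_d=0.0)
```

The point of the two-phase procedure in the text is that this kind of shift is first made to a smooth floating datum, and only at the end of processing to the final horizontal datum.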

If the source used is Vibroseis then any uphole data may well not be available. A
refraction survey can be used to determine the velocities of the weathering layer
and the bedrock without any additional calibration. The first break times are picked for each shot record and a plot made of these picked times t against the distance x along the survey, Figure P-7.

Figure P-7
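The interpretation of such a time–distance plot can be sketched as follows. The model values (700 m/s weathering over 2500 m/s bedrock, 20 m thick) are invented; the depth estimate uses the standard two-layer intercept-time formula, which the text does not state explicitly.

```python
import numpy as np

# Synthetic two-layer refraction first breaks: direct arrivals at V_w, head
# waves at V_b with intercept time t_i (all values invented).
V_w, V_b, z_true = 700.0, 2500.0, 20.0               # m/s, m/s, m

x = np.arange(10.0, 500.0, 10.0)                     # offsets, m
t_direct = x / V_w
t_i = 2 * z_true * np.sqrt(V_b**2 - V_w**2) / (V_w * V_b)
t_head = t_i + x / V_b
t_first = np.minimum(t_direct, t_head)               # picked first-break times

# Fit the two branches: near offsets give V_w; far offsets give V_b and t_i.
near = x < 50.0
far = x > 300.0
slope_w = np.polyfit(x[near], t_first[near], 1)[0]
slope_b, intercept = np.polyfit(x[far], t_first[far], 1)
V_w_est, V_b_est = 1.0 / slope_w, 1.0 / slope_b

# Standard intercept-time formula for the weathering thickness.
z_est = intercept * V_w_est * V_b_est / (2 * np.sqrt(V_b_est**2 - V_w_est**2))
```

The inverse slopes of the two straight-line branches recover the two velocities, and the intercept of the refractor branch recovers the weathering thickness.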

An example of gathers before and after the application of refraction statics is shown below in Figure P-8.


Gathers Without Refraction Statics

Gathers With Refraction Statics Applied

Figure P-8


3.1.3 Deconvolution

Wiener Filtering

This is a method of designing a filter which, when convolved with an input signal, minimises, in a least-squares sense, the difference between the actual output and the desired output. The filter coefficients f = (f0, f1, …) are determined by solving the normal equations R f = g, where R is the Toeplitz matrix formed from the autocorrelation of the input signal and g is the crosscorrelation of the desired output with the input.

When the desired output is a unit spike at zero lag (1,0,0,0,0,…,0) then the resulting filter is called a least squares inverse filter, or a spiking filter.

Some other possible choices for the desired output are:


1) a unit spike with a desired lag
2) a time advanced version of the input signal
3) a zero-phase wavelet, or
4) a wavelet of a desired shape.

Items one and two above require only the autocorrelation of the input signal to
calculate the filter coefficients. For spiking deconvolution, an important
requirement for the stability of the above equation is that the input signal is
minimum phase, which means that its inverse is also minimum phase and the
resulting filter coefficients are finite.
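A spiking filter of this kind can be sketched directly from the normal equations. The wavelet, filter length and prewhitening level below are invented for illustration:

```python
import numpy as np

# A minimal sketch of a least-squares (Wiener) spiking filter: solve R f = g,
# with R the Toeplitz autocorrelation matrix of the input wavelet and g the
# crosscorrelation of the desired spike (1, 0, 0, ...) with the input.
w = np.array([1.0, -0.5, 0.25, -0.125])           # minimum-phase input wavelet
n = 20                                             # filter length in samples

r = np.correlate(w, w, mode="full")[len(w) - 1:]   # one-sided autocorrelation
r = np.pad(r, (0, n - len(r)))                     # zero-pad to filter length
R = np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)])
R += 0.001 * r[0] * np.eye(n)                      # 0.1% prewhitening for stability
g = np.zeros(n)
g[0] = w[0]               # crosscorrelation of the desired spike with the wavelet
f = np.linalg.solve(R, g)

spiked = np.convolve(w, f)                         # approximately (1, 0, 0, ...)
```

Because the chosen wavelet is minimum phase, its inverse is stable and the filter compresses it to a good approximation of a unit spike.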

Predictive Deconvolution

If the desired output is a time-advanced version of the input, item 2 above, then this gives rise to the process known as predictive deconvolution. If the input signal is w(t) then we want to predict w(t + α), where α is the prediction lag. If the resulting filter coefficients are (f0, f1, f2, …) and s(t) is the convolution of these coefficients with the input signal, the error in the output is [w(t + α) – s(t)]. The equivalent operator turns out to be [1, 0, 0, …, 0, –f0, –f1, –f2, …], where there are α – 1 zeroes in this, so called, prediction error filter.

When convolved with the input signal, this prediction error filter yields the error
in the prediction process. An example of the use of predictive deconvolution is in
the removal of multiples. The known multiple period will be predictable while the
seismic reflections will not (they are assumed to be randomly distributed) and so if


the prediction lag is adjusted to be a little larger than the first zero-crossing of the
autocorrelation, the error that results should be the seismic trace minus the
multiples. If the prediction lag is a little longer than the seismic wavelet then little
pulse compression occurs and the strict requirements for a minimum phase input
can be relaxed. It is common now to output a zero-phase wavelet from the
signature deconvolution and apply predictive deconvolution with a long prediction
lag, without adversely affecting the zero-phase pulse.

Predictive deconvolution is often applied both pre- and post-stack for multiple
removal.
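The multiple-removal use of predictive deconvolution can be sketched on a synthetic trace. The reflectivity spikes, multiple period and operator lengths below are all invented; the prediction-error filter is built from the trace autocorrelation exactly as described above:

```python
import numpy as np

# A minimal sketch of gapped predictive deconvolution. A synthetic trace
# carries a sparse reflectivity plus a periodic water-bottom multiple train
# (period `per` samples); the prediction-error filter (1, 0, ..., 0, -f0, -f1,
# ...) is designed from the trace autocorrelation and applied by convolution.
def predictive_decon(trace, alpha, n, eps=0.001):
    r = np.correlate(trace, trace, mode="full")[len(trace) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)])
    R += eps * r[0] * np.eye(n)                     # prewhitening
    g = r[alpha:alpha + n]                          # predict trace(t + alpha)
    f = np.linalg.solve(R, g)
    pef = np.zeros(alpha + n)                       # prediction-error filter
    pef[0] = 1.0
    pef[alpha:] = -f
    return np.convolve(trace, pef)[:len(trace)]

per = 50                                            # multiple period in samples
reflectivity = np.zeros(400)
reflectivity[[40, 105, 230, 333]] = [1.0, -0.7, 0.5, 0.6]
multiple_train = np.zeros(2 * per + 1)
multiple_train[[0, per, 2 * per]] = [1.0, -0.5, 0.25]
trace = np.convolve(reflectivity, multiple_train)[:400]

decon = predictive_decon(trace, alpha=per - 5, n=20)
```

The prediction lag is set a little shorter than the multiple period so that the filter's active lags span the period: the periodic multiples are predictable and are removed, while the primaries pass through largely untouched.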

Signature Deconvolution

Item 4 above (the output is a wavelet of a desired shape) is called signature deconvolution, because it takes a known input wavelet, usually a source signature, and converts it to a desired output, possibly a minimum-phase wavelet. In the Vibroseis example, the autocorrelation produces a zero-phase Klauder wavelet and this can then be converted to minimum phase by using signature deconvolution.

As an example, Figure P-9 shows the time response of a far-field signature, with its
amplitude and phase spectra. The signature has been sampled at 2 ms.

Figure P-9


This wavelet has been used to convert the shot records to zero phase. The
designature operator is shown in Figure P-10a and the operator convolved with
the original signature is given in Figure P-10b. A comparison of a shot record
before and after signature deconvolution is shown in Figure P-11 and P-12
respectively. The change in the shape of the pulse is quite subtle in this case.

Figure P-10a


Figure P-10b


Figure P-11 A Shot Record before Signature Deconvolution


Figure P-12 The Shot Record after Zero Phase Signature Deconvolution

3.1.4 Spherical Divergence and Absorption Correction

When a shot is fired, the energy radiates outwards as a sphere, and at any particular location on the surface of this sphere the energy falls as the sphere expands. The correction for this loss of energy is known as the spherical divergence correction. The surface area of a sphere of radius r is 4πr², so the energy decreases according to the square of the distance travelled, r. The amplitude therefore decays in proportion to 1/r, and the theoretical correction should be proportional to the distance travelled, Vt. At this stage in the processing sequence the velocity profile of the earth is not known, and so a correction proportional to the time travelled is often applied.

Amplitude is also lost because of anelastic attenuation, mode conversions and scattering. An approximation to these amplitude losses is given by:

A = A0·e^(–αx)

where A is the amplitude, A0 the initial amplitude, α the attenuation coefficient and x the distance travelled. An appropriate correction could therefore be proportional to e^(αt), and this is sometimes applied. However, it is more usual to run tests on the data to see what type of correction is adequate at this stage. It is often the case that a correction proportional to t² is applied to account for the combined amplitude loss due to spherical divergence and anelastic attenuation. If historical velocities are available then a vt² correction can be used to balance the amplitudes.
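The t² gain is the simplest of these corrections and can be sketched in a few lines; the sample interval and the synthetic decay used here are invented:

```python
import numpy as np

# A minimal sketch of a t**2 gain: a synthetic trace whose amplitude decays
# as 1/t**2 is rebalanced by multiplying each sample by t**2.
dt = 0.004
t = np.arange(1, 1001) * dt                    # start at dt to avoid t = 0
signal = np.sin(2 * np.pi * 30.0 * t)          # the balanced amplitudes
raw = signal / t**2                            # decayed 'recorded' trace
gained = raw * t**2                            # t**2 gain restores the balance
```

In practice, as the text notes, the exponent is chosen by testing rather than assumed.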

Figure P-13(a) shows a shot record before amplitude correction and P-13(b) the same record after a t² correction has been applied. It is clear from the result that this type of correction is adequate to balance the amplitudes at longer times.

As well as a loss in amplitude, anelastic attenuation causes a loss in high frequencies, and sometimes an effort is made in the processing to recover these higher frequencies in order to increase the bandwidth of the data. The process of attempting this recovery is called inverse Q filtering, or Q deconvolution.


Figure P-13(a) A Raw Shot Record


Figure P-13(b) The Shot Record after a t² Correction


3.1.5 Inverse Q Filtering

Because the earth is not a perfectly elastic medium, seismic waves experience anelastic attenuation that results in a preferential loss of high frequencies, which, when expressed in the frequency domain, has the form:

A(ω) = e^(–|ω|T/2Q)

where ω is the angular frequency, T is the two-way travel time through the medium and Q is a measure of the absorption, called the ‘quality factor’. The absorption also results in a phase shift, which is assumed to be minimum phase and so is related to the amplitude spectrum by the condition:

φ(ω) = HT[ln A(ω)]

where HT denotes the Hilbert Transform.

To summarise, the effect of Q absorption is to reduce the amplitude of the seismic trace, to introduce a time shift and to broaden the seismic pulse through the loss of high frequencies. The most pronounced effects occur within the first second of travel time. The loss in amplitude alone is corrected by the gain recovery described above.

Inverse Q filtering attempts to reverse both the loss in frequency and the delay introduced to the seismic pulse, within the noise limits of the data. Deconvolution is used to design the inverse operators of A(ω) over time without amplifying the noise, and these operators are applied in a time-dependent fashion down the seismic traces. An example is shown below. Figure P-14 shows the seismic data before the application of inverse Q filtering, while Figure P-15 shows the results of a test to compare the boost in high frequencies as a result of assuming Q = 125 (left half of the section) and Q = 350 (right half). The difference in frequency content between the two halves of the data is evident.
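The amplitude part of the inverse operator can be sketched for a single travel time T. This ignores the phase term and the time-varying application down the trace; Q, T and the gain cap are invented:

```python
import numpy as np

# A minimal sketch of amplitude-only inverse Q filtering: the forward loss
# exp(-|omega| T / 2Q) is undone in the frequency domain, with the boost
# capped so that noise is not amplified without limit.
def inverse_q_amplitude(trace, dt, T, Q, max_gain=20.0):
    n = len(trace)
    spec = np.fft.rfft(trace)
    omega = 2 * np.pi * np.fft.rfftfreq(n, dt)
    gain = np.minimum(np.exp(omega * T / (2 * Q)), max_gain)
    return np.fft.irfft(spec * gain, n)

# Round trip: attenuate a smooth wavelet, then restore it.
dt, T, Q = 0.002, 1.0, 100.0
t = np.arange(256) * dt
wavelet = np.exp(-((t - 0.1) / 0.02) ** 2)
spec = np.fft.rfft(wavelet)
omega = 2 * np.pi * np.fft.rfftfreq(len(wavelet), dt)
attenuated = np.fft.irfft(spec * np.exp(-omega * T / (2 * Q)), len(wavelet))
restored = inverse_q_amplitude(attenuated, dt, T, Q)
```

For this band-limited pulse the capped boost recovers the original almost exactly; on real data the cap trades resolution against noise, which is the point of the Q = 125 versus Q = 350 comparison in Figure P-15.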


Figure P-14


Figure P-15


3.1.6 Dip Filtering

A single shot gather and its f-k transform are shown in Figure P-16. Many
dipping events can be seen at longer offsets. These are refractions and near-
surface waves and the ground-roll can be seen as the steepest dipping event
travelling from top to bottom of the shot record. The dipping events on the gather
transform to linear events beginning from the zero wavenumber. The lowest
frequency in this data is about 7 Hz. A ‘pie-slice’ is used to define the limits of the
dipping noise and remove it in the f-k domain. The result of the dip filtering on the
shot record is shown in Figure P-17. Clearly not all the ground roll has been
removed in this example.
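The pie-slice rejection can be sketched in the f-k domain as follows. The geometry, reject velocity and the two synthetic events are all invented:

```python
import numpy as np

# A minimal sketch of an f-k 'pie-slice' dip filter: spectrum samples whose
# apparent velocity |f/k| falls below v_reject (steeply dipping noise such as
# ground roll) are zeroed.
def fk_dip_filter(gather, dt, dx, v_reject):
    spec = np.fft.fft2(gather)
    nt, nx = gather.shape
    f = np.fft.fftfreq(nt, dt)[:, None]           # temporal frequency, Hz
    k = np.fft.fftfreq(nx, dx)[None, :]           # wavenumber, cycles/m
    v_app = np.abs(f) / np.maximum(np.abs(k), 1e-12)
    keep = v_app >= v_reject                      # pass the flat, fast events
    return np.fft.ifft2(spec * keep).real

# Demo gather: a flat event plus a slow (1000 m/s) dipping event.
nt, nx, dt, dx = 256, 32, 0.004, 25.0
t = np.arange(nt) * dt
pulse = lambda t0: np.exp(-((t - t0) / 0.03) ** 2)
flat = np.column_stack([pulse(0.2) for _ in range(nx)])
slow = np.column_stack([pulse(0.1 + j * dx / 1000.0) for j in range(nx)])
filtered = fk_dip_filter(flat + slow, dt, dx, v_reject=2000.0)
```

As in the example in the text, the rejection is imperfect: spectral leakage from the finite aperture means some energy from the dipping event survives the mask.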

Figure P-16 A Raw Shot Gather with NMO applied


Figure P-17. Shot Record with F-K Dip Filtering

3.1.7 Multiple Suppression

Usually some specialised process to remove multiple energy is applied at this stage in the processing sequence. The example below shows a process for removing multiples from a water bottom at about 1500 ms. The first water bottom multiple
can be seen on the gathers at 3000 ms, Figure P-18. The NMO correction has been
applied using a velocity function which over-corrects the primaries but under-
corrects the multiples, so that they can be easily distinguished. A close up of the
gathers is given in Figure P-19 and Figure P-20 shows the same gathers after the
water-bottom multiples have been removed. Only the over-corrected primaries
remain, all the under-corrected multiples are gone.


Figure P-18


Figure P-19


Figure P-20

3.1.8 Velocity Analysis

We need to establish a velocity function for each gather, which can be used to
correct the moveout hyperbolae and flatten all the events. Every gather will
require a different function but instead of analysing all the gathers, it is sufficient
to make an analysis every 250m or 500m and interpolate the velocities in between.
In a 3D data volume we will end up with an interpolated velocity cube with actual
analysed velocities located at regular samples throughout the cube.

In order to establish the optimum velocity function to correct a single gather, a number of techniques are employed. Perhaps the most common method uses VELANS, and an example is shown in Figure P-21.


Figure P-21. Left: gather; centre: stack panels; right: semblance plot.

A range of velocities is applied to a single gather, shown on the left side of the
figure above, and for each velocity a coherence measure (called the semblance) is
calculated over a window, usually about 120ms wide, and output, shown on the
right hand side of the figure. The window is moved later in time and the process
repeated until the entire length of the gather has been covered. The semblance
values are contoured and the velocity function extracted by joining through the
contour peaks; seen as the white line joining the peaks of the plot on the right. The
central section shows the stack panels. Here, the gathers around and including the gather displayed in the figure have been stacked with the velocity functions shown as the faint white lines in the semblance plot, labelled V4 to V10. This allows the optimal velocity function to be chosen.

In conjunction with the semblance plots, velocity picking can be augmented by the
use of Constant Velocity Stacks (CVS) where a gather is stacked repeatedly at
different velocities and also with velocity fans in which the data is stacked with
percentage variations of a picked velocity function. Again this allows the velocities
to be chosen which produce the best stack response.
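The semblance computation at the heart of a VELAN can be sketched on a synthetic gather. The geometry, wavelet, window length and velocity grid are all invented:

```python
import numpy as np

# A minimal sketch of semblance velocity analysis: for each trial velocity,
# amplitudes are read along the corresponding moveout hyperbola in a short
# window and the semblance (stack energy over total energy) is computed.
dt, nt = 0.004, 500
offsets = np.arange(100.0, 3100.0, 200.0)
t = np.arange(nt) * dt
t0_true, v_true = 1.0, 2500.0

gather = np.zeros((nt, len(offsets)))
for j, x in enumerate(offsets):
    tx = np.sqrt(t0_true**2 + (x / v_true) ** 2)
    gather[:, j] = np.exp(-((t - tx) / 0.02) ** 2)   # event on its hyperbola

def semblance(gather, offsets, t0, v, dt, half=15):
    nt, nx = gather.shape
    window = np.arange(-half, half + 1)
    stack = np.zeros(window.size)
    energy = np.zeros(window.size)
    for j, x in enumerate(offsets):
        tx = np.sqrt(t0**2 + (x / v) ** 2)
        idx = np.clip(int(round(tx / dt)) + window, 0, nt - 1)
        stack += gather[idx, j]
        energy += gather[idx, j] ** 2
    return np.sum(stack**2) / (nx * np.sum(energy))

trial_v = np.arange(1800.0, 3300.0, 100.0)
sem = np.array([semblance(gather, offsets, t0_true, v, dt) for v in trial_v])
v_pick = trial_v[np.argmax(sem)]
```

The semblance peaks at the velocity that flattens the event; in a full VELAN this is repeated for a sliding time window and the peaks are joined into the velocity function.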


3.1.9 Residual Statics Correction

When the gathers have been corrected using a velocity function, the best stack
response occurs if the events are perfectly flat. If there are statics present on the
gather, then the events will not be flat and this will degrade the quality of the
stack. Residual statics will correct this situation by estimating the time shifts
required, applying them automatically and so flattening the events in preparation
for stacking. The key assumption in estimating the required time shifts is that they
are the result of the source and receiver locations at the surface and are not due to
raypath bending in the subsurface. This means that the raypaths are assumed to
travel vertically through the near surface layers, Figure P-22.

The residual statics correction involves the following procedure:

(1) Pick the travel times of the event to be corrected on the gathers
(2) Decompose the travel times into the constituent components, namely the
source and receiver statics, the structural variation along the horizon and the
residual moveout term, which takes into account the poor moveout
correction of the horizon.
(3) Apply the source and receiver static shifts to the pre-NMO corrected gathers.

Figure P-22.

The travel time picking is done automatically by choosing a reference trace and
then crosscorrelating it with the other traces in the gather. The decomposition into
the individual terms is done in a least squares sense by solving a large number of
equations in an efficient way. Finally, the source and receiver shifts obtained from the decomposition are applied to the gathers. An example of the benefits of applying residual statics is shown below.

(a)

(b) Residual statics (detail)

Figure P-23.


(a) CMP/shot/receiver stacks (detail) – no residuals

(b) CMP/shot/receiver stacks (detail) – residual statics

(c) CMP/shot/receiver stacks (detail) – residual statics and hand statics

Figure P-24


(a) CMPs with residual statics and hand statics

(b) CMPs with TV trim statics

Figure P-25


The top panel of Figure P-23 shows the stacked data without the application of
residual statics. This data has a major statics problem, which results in lack of
reflector continuity and loss of high frequencies. Multiple passes of surface
consistent residual statics were applied, together with manual statics picked from
common shot and receiver stacks (Figure P-25). These contributed to a dramatic
improvement in reflector continuity, most obviously at c. 0.9-1.0s (Figure P-23(a)
middle panel, and P-23(b), Figure P-24(c)). However, despite the intensive residual
static effort, static anomalies remained, evident in displays of the unstacked CMPs
(Figure P-25(a)). Also the high-frequency character of the original processing
remained elusive. Time-varying CMP consistent trim statics were applied (Figure
P-23(a) bottom, Figure P-23(b) right, Figure P-25(b)). These were computed
CMP-by-CMP every 200ms over 800ms windows, with a maximum shift of 8ms.
A considerable improvement in high-frequency detail resulted which matched the
original processing. No obvious artefacts were introduced into the section.
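The automatic travel-time picking step (crosscorrelation against a reference trace) can be sketched as follows; the pilot wavelet, shifts and search window are invented:

```python
import numpy as np

# A minimal sketch of the crosscorrelation picking step in residual statics:
# each trace of an NMO-corrected gather is crosscorrelated with a pilot
# (reference) trace, and the lag of the correlation peak is taken as that
# trace's static shift.
def pick_static_shifts(gather, pilot, max_lag):
    shifts = []
    for trace in gather.T:
        xc = np.correlate(trace, pilot, mode="full")
        lags = np.arange(-len(pilot) + 1, len(trace))
        keep = np.abs(lags) <= max_lag             # limited search window
        shifts.append(int(lags[keep][np.argmax(xc[keep])]))
    return np.array(shifts)

nt = 400
t = np.arange(nt) * 0.004
pilot = np.exp(-((t - 0.5) / 0.02) ** 2)           # pilot trace
true_shifts = np.array([0, 3, -2, 5, -4])          # static shifts in samples
gather = np.column_stack([np.roll(pilot, s) for s in true_shifts])
picked = pick_static_shifts(gather, pilot, max_lag=10)
```

The surface-consistent decomposition described in the text then splits these raw picks into source, receiver, structure and residual-moveout terms; only the picking is sketched here.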

3.1.10 Binning

The data are sorted into Common Midpoint (CMP) Gathers ready for normal
moveout correction and stacking. With 3D data this gathering process is known as
binning. As described in the section on seismic acquisition, the geometry is
designed to sample each reflection point in the subsurface n times, where n is the
fold of data required and the reflection points are assumed to lie at the common
midpoint positions of the offsets.

The shooting geometry controls the bin size and each shot-receiver pair is
assigned to a bin based on its midpoint position. However, because of cable
feathering and possible navigation obstacles, such as producing platforms and
military zones, the number of reflection points within a bin can vary significantly
from one bin to another. This variation degrades velocity analysis, stacking and
migration.


Figure P-26
The best seismic processing results are obtained when the bins all contain
reflection points from near offsets and include the entire offset range from
predominantly the same sail line. In order to try and obtain a uniform number of
reflection points within each bin and to include a full range of offsets, the bin size
is often expanded or ‘flexed’ to incorporate traces occupying an adjacent bin.
Figure P-26 shows fifteen bins, which could be 25m x 12.5m in size, and the
possible inline and crossline expansion factors. It is usual to expand the bin in the
crossline direction only and maintain the original inline bin dimension.

Figure P-27(a)


Figure P-27(b)

Figure P-27(a) shows the fold of coverage over a survey area using a static bin
size. In Figure P-27(b) the bins have been expanded in the crossline direction by a
factor of two, and this produces a more uniform fold within the bins.
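The flexing idea can be sketched as a simple fold-balancing pass. This ignores the offset-range bookkeeping a real implementation needs; bin sizes, target fold and midpoints are invented:

```python
import numpy as np

# A minimal sketch of crossline bin flexing: midpoints are assigned to
# nominal bins, then bins short of a target fold borrow traces from their
# crossline neighbours, capped at the target.
def flex_bin(midpoints, n_inline, n_xline, d_inline=25.0, d_xline=12.5,
             target_fold=4):
    fold = np.zeros((n_inline, n_xline), dtype=int)
    for x, y in midpoints:
        fold[int(x // d_inline), int(y // d_xline)] += 1
    flexed = fold.copy()
    for i in range(n_inline):
        for j in range(n_xline):
            for dj in (-1, 1):                    # crossline direction only
                if flexed[i, j] >= target_fold:
                    break
                if 0 <= j + dj < n_xline:
                    flexed[i, j] = min(target_fold,
                                       flexed[i, j] + fold[i, j + dj])
    return fold, flexed

# One midpoint in bin (0, 0) and five in its crossline neighbour (0, 1).
midpoints = np.array([[5.0, 3.0]] + [[5.0, 15.0]] * 5)
fold, flexed = flex_bin(midpoints, n_inline=2, n_xline=3)
```

The under-filled bin borrows from its crossline neighbour up to the target fold, while the inline bin dimension is never expanded, matching the usual practice described above.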

3.1.11 Normal Moveout

If the seismic data were acquired with the source and receiver located in the
same surface position, the result would be a zero-offset, or normal incidence
section. In practice, it is necessary to acquire the data over a wide range of offsets
in order to stack the data and increase the signal to noise ratio. The resulting
stacked section is an approximation to a normal incidence section, the degree of
approximation being dependent on the geological complexity in the subsurface. In
order to stack the data it is necessary to correct all the events in the gather to their
zero-offset time. The time difference between the offset ray path and the normal
incidence ray path is called normal moveout and the correction of tx to t0 is the
normal moveout correction, Figure P-28.


Figure P-28

From the triangle XRG, tx² = t0² + x²/V², where x is the offset and V the velocity, and the moveout is Δt = tx – t0, Figure P-29.

Figure P-29. A Common MidPoint Gather

For any observed travel time tx and offset x, a trial velocity is used in the travel time equation to calculate the moveout Δt, which is then applied to the gather. The aim is to determine the velocity function which flattens all the primary events on the gather, so that all the traces in the gather can be stacked to produce a single output trace. In this way the primary events are enhanced while noise and multiples, which will have residual moveout, are degraded. Figure P-30 shows an example of NMO corrected gathers using the standard two-term NMO equation.
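The two-term correction can be sketched as a per-trace resampling; the gather geometry, velocity and wavelet are invented:

```python
import numpy as np

# A minimal sketch of the two-term NMO correction: the output sample at
# zero-offset time t0 is read from the input trace at
# tx = sqrt(t0**2 + x**2 / v**2) by linear interpolation.
def nmo_correct(gather, offsets, v, dt):
    nt = gather.shape[0]
    t0 = np.arange(nt) * dt
    out = np.empty_like(gather)
    for j, x in enumerate(offsets):
        tx = np.sqrt(t0**2 + (x / v) ** 2)
        out[:, j] = np.interp(tx, t0, gather[:, j], left=0.0, right=0.0)
    return out

# Build a gather with one event on its hyperbola, then flatten it.
dt, nt = 0.004, 400
offsets = np.arange(0.0, 3500.0, 500.0)
t = np.arange(nt) * dt
t0_true, v = 0.8, 2500.0
gather = np.zeros((nt, len(offsets)))
for j, x in enumerate(offsets):
    tx_event = np.sqrt(t0_true**2 + (x / v) ** 2)
    gather[:, j] = np.exp(-((t - tx_event) / 0.02) ** 2)

corrected = nmo_correct(gather, offsets, v, dt)
```

After correction the event sits at its zero-offset time on every trace, ready to be stacked.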

Figure P-30. NMO Corrected Gathers Using the 2-Term Equation

NMO for plane horizontal layers

Obviously the single-layer, constant-velocity earth model shown in Figure P-28 is far too simple, and the next level of complication is to assume that the earth is a sequence of plane, horizontal layers. The travel time equation for this case has an infinite number of terms and may be written tx² = c1 + c2x² + c3x⁴ + …, where c1 = t0² and c2 = 1/Vrms² (this equation is derived in section 5.6.2).

P36
Seismic Interpretation

For offsets up to about the depth of interest, the two-term equation, with the velocity V assumed to be equivalent to Vrms, is sufficient to correct the moveout on the gathers. For longer offsets, the three-term equation is necessary to flatten the events at the farthest offsets. An example of using the three-term equation on the same set of gathers displayed in Figure P-30 is shown in Figure P-31. When comparing the two figures, it is clear that some events are flatter over the full range of offsets when using the three-term NMO equation.

Figure P-31. NMO Corrected Gathers Using the 3-Term Equation


However, not all events in Figure P-31 are truly flat; some still show residual moveout at the far offsets. This may be the result of anisotropy, a commonly observed effect in which the velocity of sound in a medium depends on the direction of propagation. As a first approximation, it is assumed that the earth is vertically transversely isotropic, which means that acoustic energy travels faster in the horizontal direction than in the vertical, but within the horizontal plane the velocity is independent of azimuth. Seismic energy travelling from the shots to the receivers has a horizontal component, and this component increases with offset. Figure P-32 shows the gathers corrected with the three-term NMO equation and an additional term which assumes that the horizontal seismic velocity is 10% faster than the vertical.

Figure P-32. NMO correction, including a 10% anisotropy factor

NMO for dipping horizons

The travel time equation has to be modified in the presence of dipping layers by replacing the velocity v by v/cos θ, where θ is the dip (Levin, 1971), and is written as

tx² = t0² + x²·cos²θ/v²

In order to recover the RMS velocity v, the velocities used to stack the data in the presence of dip must be divided by the cosine of that dip. This is done by posting and contouring the rms velocities and then applying some form of smoothing filter. The cosine correction is applied to these smoothed rms velocities. It is usual to include DMO, or dip moveout, in the processing sequence, see section 3.1.13. This process removes the dip dependence of the stacking velocity, and so the cosine correction is then not necessary.

NMO stretch and muting

For a zero-offset time of 1.0 s and a velocity of 3000 m/s, the travel time at an offset of 6000 m is 2.236 s. So Δt is 1.236 s, and an event at this offset has to be moved by 1.236 s to correct it to t0. If the wavelet is 60 ms long, it will be stretched over this time gate after NMO and will therefore end up much reduced in frequency. The NMO stretch is given by:

Δf/f = ΔtNMO/t0

where f is the dominant frequency of the wavelet, Δf is the change in frequency brought about by the NMO correction, and ΔtNMO is tx – t0, the difference between the travel time along the path SRG, Figure P-28, and the normal incidence time.

For the above example, if the dominant frequency f is 25 Hz, then Δf, the change in frequency caused by the NMO correction, is 31 Hz. This is greater than the dominant frequency, so the wavelet will be stretched to a very low frequency and will degrade the stack. For this reason, a mute is applied to zero any data that will suffer an NMO stretch of more than about 40%. The resulting stacked data will therefore vary in fold over the length of the mute, from perhaps two- or three-fold near time zero, increasing progressively to full fold at the end of the mute.
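The stretch test can be sketched directly from the formula above, using the worked numbers from the text:

```python
import numpy as np

# A minimal sketch of a stretch mute: a sample at zero-offset time t0 and
# offset x survives only if its NMO stretch (tx - t0) / t0 stays below the
# limit (40% here, as in the text).
def survives_stretch_mute(t0, x, v, limit=0.4):
    tx = np.sqrt(t0**2 + (x / v) ** 2)
    return (tx - t0) / t0 <= limit

# The worked example: t0 = 1.0 s, v = 3000 m/s, x = 6000 m gives tx = 2.236 s,
# a stretch well over 40%, so this sample is muted; deeper samples survive.
tx_example = np.sqrt(1.0**2 + (6000.0 / 3000.0) ** 2)
```

Applied sample by sample, this produces exactly the tapering fold with depth described around Figure P-33.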

The seismic section below, Figure P-33, shows the mute on the far left of the section. The data are full fold where the mute stops, at about 3.5 s. The fold of the data gradually decreases from this time up to the surface, where it may be only two- or three-fold, all the remaining traces in the gather having been removed by the mute.

Figure P-33

3.1.12 CMP Raytracing Example

The depth model is the same as that used in the acquisition example of shot –
receiver raytracing:


This type of raytracing models the CMP gathers that will be obtained from
collecting all the shots and receivers that have a common midpoint location.

In this example the CMP is at surface position 5000.


The result of the CMP raytracing for surface position 5000


3.1.13 Dip Moveout

With a plane horizontal layer the reflection point associated with a Common
Midpoint (CMP) gather occurs in the same location over all offsets, Figure P-34. If
the layer is dipping then the reflection point on the layer moves updip as the
offsets increase.

Figure P-34

Also, the stacking velocity for a dipping layer increases as 1/cosine of the dip,
so for the above example with a layer dipping at 18° the stacking velocity is
2103 m/s instead of 2000 m/s for the horizontal case. It is clearly necessary to
correct for these effects so that we can stack the data at the correct velocity,
remove the reflection-point dispersal and so produce a true zero-offset section.
The process designed to make these corrections is called Dip Moveout or DMO,
Figure P-35.
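The 1/cos(dip) relationship above is easily checked in a few lines of code; the function name is illustrative.

```python
import math

def dip_stacking_velocity(v_medium, dip_degrees):
    """Stacking velocity over a plane dipping reflector in a constant
    velocity medium: V_stack = V / cos(dip)."""
    return v_medium / math.cos(math.radians(dip_degrees))

# The 18-degree example from the text: about 2103 m/s for a 2000 m/s medium.
v18 = dip_stacking_velocity(2000.0, 18.0)
```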


Figure P-35

The processing sequence to apply DMO is as follows:

1) Velocity Analysis

2) NMO correction using velocities for flat events

3) Sort data into common offset sections

4) Apply DMO correction

5) Sort back into CMP gathers

6) Remove NMO correction using step 1 velocities

7) Velocity Analysis

8) NMO correction using optimum stacking velocities

An example of the application of DMO is shown below, Figures P-36 to P-38.


Figure P-36. A mid-offset plane before the application of DMO


Figure P-37. The single offset plane shown in Figure P-36, but with DMO
applied.


Figure P-38. DMO has been applied to the same data as in Figures P-36 and
P-37, but here two adjacent offset planes have been interleaved before DMO
and a single offset plane is output from the process. This interleaving is
done to increase spatial sampling and so reduce aliasing. Note the improved
definition in the steep dips and the reduction in the banding in the data,
compared with the previous two figures.


3.1.14 Prestack Time Migration

In a standard processing sequence, prestack time migration is applied to the
NMO- and DMO-corrected common-offset sections. This is often performed using
common-offset migration with a Stolt or extended Stolt algorithm, see section
3.1.19.

With reference to Figure P-39, the time migration will move the reflection point
from the normal incidence position NI to the true image position IM, vertically
above the reflection point R. Any one of the migration algorithms discussed under
the Migration section (3.1.19) is capable of achieving this, but the simplest
approach is to use a constant velocity migration, which will improve the velocity
analysis and can be easily removed by demigration after stack, in preparation for
a variable velocity migration.

Figure P-39

A possible processing flow to include PSTM would be the same as that for DMO
application above, up to stage 4 and then:


5) Migrate each common offset section with a constant velocity

6) Sort the common offset data back to CMP gathers

7) Remove NMO correction using step 1 velocities

8) Velocity Analysis

9) NMO correction using optimum stacking velocities

10) Stack the data

11) Perform inverse migration using the step 5 constant velocity

Examples of the benefits of applying this processing sequence are shown in
Figures P-40 and P-41. Figure P-40 shows the data after the application of DMO.


Figure P-40

Figure P-41 shows the data after the application of Pre-Stack Time Migration
using a constant velocity. As an interesting comparison, Figures P-42 and P-43
show the same data set, but with the DMO and PSTM processes applied using a
single, variable velocity function instead of a constant velocity. The PSTM
results with the variable velocity are superior to those with the constant
velocity, particularly in the deeper parts of the section around 4500 ms, as one
would anticipate. In this instance, the constant velocity result was considered
good enough to improve the velocity picking and therefore the stack response, and
it is easier to demigrate a constant velocity migration than one that has used a
variable velocity. The demigration process is called ‘reverse time migration’.


Figure P-41


Figure P-42


Figure P-43

In areas of moderate structural complexity it is now common to use a full
Kirchhoff 3D PSTM instead of applying DMO and then a Stolt-type PSTM with
either a constant velocity or a V(t) function. The Kirchhoff algorithm requires a
full 3D velocity field, which is not always readily available. But if the 3D data are
being reprocessed, re-shot or acquired in a mature basin, then the necessary
velocity data may well be available.

The Kirchhoff summation method can be used to migrate the 3D data in one
pass. The migration works on common-offset data and the image is formed by
summing weighted amplitudes along diffraction curves that are constructed using
the rms velocity from the diffraction travel time equation.
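For the zero-offset case the diffraction curve for a point scatterer at lateral position x0 and two-way time t0 is commonly written t(x) = sqrt(t0² + 4(x − x0)²/v_rms²); common-offset Kirchhoff PSTM uses the corresponding double-square-root form. A small sketch (names illustrative):

```python
import numpy as np

def diffraction_time(t0, x, x0, v_rms):
    """Two-way time along a zero-offset diffraction hyperbola for a
    point scatterer at (x0, t0):  t(x) = sqrt(t0^2 + 4 (x-x0)^2 / v^2)."""
    return np.sqrt(t0**2 + 4.0 * (x - x0) ** 2 / v_rms**2)

t_apex = diffraction_time(2.0, 1000.0, 1000.0, 2500.0)   # at the apex t = t0
t_flank = diffraction_time(2.0, 2000.0, 1000.0, 2000.0)  # flanks arrive later
```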


The algorithm works well in the presence of strong vertical velocity contrasts
and handles gradual lateral velocity variations by using the three-term moveout
equation, implying that the moveout under these conditions is still hyperbolic. In
fact, lateral velocity variations give rise to non-hyperbolic moveout and under
these circumstances prestack depth migration is required. So a judgment must be
made regarding the severity of the lateral velocity variation and whether
Kirchhoff PSTM will correctly image the data.
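The three-term moveout equation mentioned above is, in its commonly quoted (Taner–Koehler) form:

```latex
t^2(x) \;=\; c_1 + c_2\,x^2 + c_3\,x^4,
\qquad c_1 = t_0^2, \quad c_2 = \frac{1}{v_{\mathrm{rms}}^2},
```

where the fourth-order coefficient c3 depends on higher-order moments of the velocity function. Truncating after the x² term recovers the familiar hyperbolic moveout; the x⁴ term absorbs mild departures from it.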

3.1.15 PSTM and Velocity Analysis

The benefit to the resulting velocity analyses of applying prestack time
migration is shown in Figure P-44. The red line on this figure is the original
velocity function, which is quite different from the function in black derived from
the reprocessed data that includes PSTM.


Figure P-44

Figure P-45 shows the result of stacking the data with the original velocities;
this should be compared with the stack in Figure P-46, which uses the revised
velocity field derived from the pre-stack time migrated data.


Figure P-45


Figure P-46

3.1.16 Stacking

The traces within a common midpoint gather are summed together to produce a
single stacked trace. All gathers are treated in this way and the result is a
migrated stack section, though one with a sub-optimal migration applied prestack
unless a full 3D Kirchhoff PSTM has been used. Consequently, the data are
demigrated to remove the effects of the migration so that an optimal post-stack
processing sequence can be applied. The data in Figure P-47 have been pre-stack
time migrated using a constant velocity of 1600 m/s, the velocities have been
picked, the data stacked and then demigrated using the same constant velocity.
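In its simplest form the stack is just a fold-normalised sum over the NMO-corrected gather. A minimal sketch (names illustrative), with the mute expressed as a mask of live samples:

```python
import numpy as np

def stack_gather(gather, mask=None):
    """Stack the traces of one NMO-corrected CMP gather.

    gather : (n_traces, n_samples) array
    mask   : optional boolean array of live (unmuted) samples; the sum
             is normalised by the live fold at each time sample."""
    gather = np.asarray(gather, dtype=float)
    mask = np.ones_like(gather, bool) if mask is None else np.asarray(mask, bool)
    fold = mask.sum(axis=0)                          # live traces per sample
    total = np.where(mask, gather, 0.0).sum(axis=0)
    return np.divide(total, fold, out=np.zeros_like(total), where=fold > 0)

stacked = stack_gather([[1.0, 2.0], [3.0, 0.0]],
                       mask=[[True, True], [True, False]])
```

Normalising by the live fold, rather than the nominal fold, keeps amplitudes balanced through the mute zone described earlier.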


Figure P-47

3.1.17 Normal Incidence Raytracing Example


The result of normal incidence raytracing through the depth model.


The modelled stacked time section

3.1.18 Noise Attenuation After Stack

Noise attenuation, for example F-X filtering, can be applied after stack to
remove random noise in the data. The signal is assumed to be the correlatable
components in the data and the filter is applied in the F-X domain to remove those
frequency components that differ from those of the predicted signal. This is
sometimes called F-X deconvolution but it is not a deconvolution in the strict
sense, merely a prediction filter. This process has been applied to the data in
Figure P-46 and the result is shown in Figure P-48.
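As a rough illustration of the prediction idea, the sketch below (a simplified, one-sided version of F-X prediction filtering; all names illustrative) transforms each trace to the frequency domain, fits a least-squares prediction filter across traces at each frequency, and keeps only the predictable part:

```python
import numpy as np

def fx_random_noise_filter(data, filt_len=4):
    """Simplified, one-sided F-X prediction filter sketch.

    data : (n_traces, n_samples).  Each temporal-frequency slice is a
    complex series in x; linear (coherent) events are predictable there
    while random noise is not.  A least-squares prediction filter is
    fitted per frequency and only the predicted part is kept."""
    n_x, n_t = data.shape
    D = np.fft.rfft(data, axis=1)                  # to the F-X domain
    out = np.zeros_like(D)
    for f in range(D.shape[1]):
        s = D[:, f]
        # predict s[k] from the filt_len preceding spatial samples
        A = np.array([s[k - filt_len:k] for k in range(filt_len, n_x)])
        coef, *_ = np.linalg.lstsq(A, s[filt_len:], rcond=None)
        pred = s.copy()                            # first traces pass through
        pred[filt_len:] = A @ coef                 # predictable part only
        out[:, f] = pred
    return np.fft.irfft(out, n=n_t, axis=1)        # back to T-X

# A laterally invariant (perfectly predictable) section passes through
# essentially unchanged; uncorrelated noise would be attenuated.
flat = np.tile(np.array([0.0, 1.0, 0.0, -1.0, 0.0, 0.0, 0.0, 0.0]), (10, 1))
filtered = fx_random_noise_filter(flat)
```

Production implementations typically run complex Wiener prediction filters both forwards and backwards in x and blend the prediction with the data rather than replacing it outright.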


Figure P-48

Predictive deconvolution can also be used after stack to eliminate short-period
multiples. These are often seen as reverberations below a primary event. Figure
P-49 (top section) shows a stacked data set that exhibits some evidence of
reverberation. After predictive deconvolution, the primary seismic pulse is shorter
with the collapse of the reverberating wavetrain, Figure P-49 (bottom section).
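A minimal sketch of a Wiener prediction-error filter of this kind follows; the gap and filter length are in samples, names are illustrative, and prewhitening is added for stability:

```python
import numpy as np

def predictive_decon(trace, gap, filt_len, prewhiten=0.001):
    """Predictive deconvolution sketch: design a Wiener filter from the
    trace autocorrelation to predict the trace `gap` samples ahead, and
    subtract the prediction.  Periodic energy (reverberations) is
    predictable and is removed; primaries remain in the error."""
    n = len(trace)
    full = np.correlate(trace, trace, mode="full")
    r = full[n - 1 : n - 1 + gap + filt_len]       # one-sided autocorrelation
    R = np.array([[r[abs(i - j)] for j in range(filt_len)]
                  for i in range(filt_len)])
    R += prewhiten * r[0] * np.eye(filt_len)       # stabilise the inversion
    f = np.linalg.solve(R, r[gap : gap + filt_len])
    predicted = np.zeros(n)
    for k in range(gap, n):
        seg = trace[max(0, k - gap - filt_len + 1) : k - gap + 1][::-1]
        predicted[k] = np.dot(f[: len(seg)], seg)
    return trace - predicted                       # prediction error

# Demo: a decaying reverberation train with a 20-sample period; the
# filter predicts the repeats and leaves mainly the first arrival.
trc = np.zeros(200)
for k in range(10):
    trc[10 + 20 * k] = 0.6 ** k
out = predictive_decon(trc, gap=5, filt_len=40)
```

Because prediction only starts `gap` samples in, everything before the gap passes through untouched, which is what preserves the primary wavelet shape.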


Figure P-49


3.1.19 Migration

Migration Formulae

Dips on the stack section are related to dips on the migrated data by

tan θs = sin θm

where θs is the apparent dip on the (unmigrated) stack section and θm is the
true dip after migration; migration therefore steepens dips. For example, a true
dip of 30° appears on the stack section at tan θs = sin 30° = 0.5, i.e. θs ≈ 26.6°.

It is straightforward to derive formulae for the distance x and time t that an
event will move under migration. It is often instructive to use these formulae to
demigrate a seismic section, using the applied migration velocities, and then to
remigrate with a different velocity field. In this way, one can observe which events
are over- or under-migrated. Often, the steeper the dip of an event, the more
incorrect the migration will turn out to be, and this can have a big impact if a
drilling target is located on steep dips. The mis-positioning of events will produce a
totally different drilling result when compared to the prognosis.
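For a constant velocity these displacement formulae take a well-known closed form (commonly attributed to Chun and Jacewitz); a hedged sketch, with illustrative names:

```python
import math

def migrate_event(x, t, p, v):
    """Constant-velocity migration displacement of a point on a dipping
    event.

    x, t : position (m) and two-way time (s) on the unmigrated section
    p    : local time dip dt/dx (s/m) on the unmigrated section
    v    : medium velocity (m/s)
    Returns the migrated position, time and time dip (x_m, t_m, p_m)."""
    s = math.sqrt(1.0 - (v * p / 2.0) ** 2)  # cosine of the true dip
    x_m = x - v**2 * t * p / 4.0             # events move updip ...
    t_m = t * s                              # ... and shallower ...
    p_m = p / s                              # ... and their dips steepen
    return x_m, t_m, p_m

flat = migrate_event(1000.0, 2.0, 0.0, 2500.0)       # flat events don't move
moved = migrate_event(1000.0, 2.0, 0.0004, 2500.0)   # dipping events shift updip
```

Note that p_m = p / sqrt(1 - v²p²/4) is the time-dip form of the tan θs = sin θm relation quoted earlier.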


The Exploding Reflector Model

The stack section represents a zero offset section, which is the section that would
have been recorded if the source and receiver were in the same surface location.
An alternative way to think about this is to consider the situation where the
sources are located on the reflector itself, with the receivers at the surface. When
the sources explode, the energy that arrives at the receivers first has left the
reflector normally because, by Fermat’s Principle, this is the shortest time path.
When using the exploding reflector model, the times are made correct by using
half the layer velocity, since the energy has only travelled one way to reach the
detectors on the surface. The exploding reflector principle is used in finite difference
migration algorithms.

When to Migrate – Time or Depth

Time Migration after Stack:- Used with low dips (say <
15°) and small lateral velocity contrasts. Precisely what constitutes ‘small’ depends
on the amount of refraction, and the associated inaccuracy in positioning of
reflectors, that the interpreter is prepared to ignore.

Depth Migration after Stack:- Used when the lateral velocity
contrasts cause refraction through dipping layers which shifts the reflectors
laterally by a significant amount. The definition of ‘significant’ depends on the
accuracy required in predicting a drilling target.

Pre-Stack Time Migration (PSTM):- Used when conflicting
dips on the gathers make it impossible to pick velocities accurately. It is
implemented in two different ways.
1) A hybrid method, described in some detail here, whereby DMO is applied to
the data, which are then migrated prestack using a constant velocity. This migration
is removed post stack in preparation for a variable velocity time migration to
correctly position the events.
2) A full Kirchhoff 3-D time migration using a variable velocity field. This is a
non-zero-offset algorithm and so DMO is NOT applied with this process.

Pre-Stack Depth Migration (PSDM):- Used where strong
lateral velocity variations, such as in salt provinces, make it necessary to migrate
the data before stack using a fully defined 3D velocity field.

Migration Algorithms

Migration algorithms are based on the one-way scalar wave equation and
assume that all the data presented to them are primary reflections. The three main
categories of algorithms are:

1) those based on the integral solution to the scalar wave equation

2) those based on the finite-difference solution of the scalar wave equation, and

3) those implemented in the frequency-wavenumber domain.

There are many different implementations of these three types of algorithms,
which vary predominantly in the way they handle steep dips and lateral velocity
variations.


Kirchhoff Summation

This technique sums the amplitudes along a diffraction hyperbola whose shape is
controlled by the subsurface velocity specified. Corrections are made in order to
take into account spherical spreading, the variation of amplitude with reflection
angle and the phase shift associated with Huygens’ secondary sources. These
algorithms are generally good at handling dips up to 90 degrees but have
limitations in their use with lateral velocity variations.
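A toy illustration of the summation idea is sketched below; the spreading, obliquity and phase corrections described above are omitted, and all names are illustrative:

```python
import numpy as np

def kirchhoff_zero_offset(section, dt, dx, v):
    """Toy zero-offset Kirchhoff time migration: every output sample is
    treated as a candidate point diffractor and input amplitudes are
    summed along its hyperbola t(x) = sqrt(t0^2 + 4 (x-x0)^2 / v^2).
    Spreading/obliquity weights and the phase correction are omitted."""
    n_x, n_t = section.shape
    out = np.zeros((n_x, n_t))
    x = np.arange(n_x) * dx
    for ix0 in range(n_x):                # candidate diffractor position
        for it0 in range(n_t):            # candidate diffractor time
            t = np.sqrt((it0 * dt) ** 2 + 4.0 * (x - x[ix0]) ** 2 / v**2)
            it = np.rint(t / dt).astype(int)
            live = it < n_t               # part of hyperbola inside section
            out[ix0, it0] = section[live, it[live]].sum()
    return out

# Demo: forward-model one point diffractor, then migrate; the scattered
# energy collapses back to the diffractor's position.
n_x, n_t, dt, dx, v = 21, 80, 0.004, 25.0, 2000.0
sec = np.zeros((n_x, n_t))
xs = np.arange(n_x) * dx
td = np.sqrt((20 * dt) ** 2 + 4.0 * (xs - xs[10]) ** 2 / v**2)
hit = np.rint(td / dt).astype(int)
sec[np.arange(n_x)[hit < n_t], hit[hit < n_t]] = 1.0
mig = kirchhoff_zero_offset(sec, dt, dx, v)
```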

Finite Difference Methods

The finite difference techniques are based on the fact that the stack section can
be modelled by the exploding reflector concept, described above. Migration is then
seen as downward continuation of the seismic wavefield followed by imaging. The
imaging condition is effected by setting t = 0 in the extrapolated wavefield. This
can be easily understood because at t = 0 the wavefront has not travelled any
distance and so is still at its origin on the reflector. It therefore has the shape of the
reflector.

There are many types of finite difference approximations to the differential
operators in the scalar wave equation, in both the X-T and the F-X domains. These
algorithms are good at handling lateral velocity contrasts but are usually
restricted to a maximum dip approximation.

Frequency – Wavenumber Migration

Stolt Migration

This was initially developed as a constant velocity migration and involves a
Fourier transform to the frequency domain and then a coordinate transform to
the vertical wavenumber while keeping the horizontal wavenumber constant.
Later, Stolt added stretching of the time axis to allow for vertical velocity
variations; this is known as the ‘extended Stolt’ algorithm. It has limited use in
handling lateral velocity changes but is the algorithm that is used when applying
PSTM with a constant velocity.

Gazdag Migration

This is based on the equivalence of downward continuation in the time domain
to a phase shift in the frequency-wavenumber domain. The imaging principle is
invoked by summing over the frequency components of the extrapolated wavefield
at each depth step, to obtain the image at t = 0. Again, this technique has
limitations when dealing with lateral velocity variations but can be used for dips
up to 90 degrees.
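A sketch of the phase-shift recursion for a constant velocity follows; sign conventions and amplitude treatment are simplified, evanescent energy is simply zeroed, and all names are illustrative:

```python
import numpy as np

def gazdag_phase_shift(section, dt, dx, v, dz, n_z):
    """Gazdag phase-shift migration sketch for a constant velocity.

    The stack section is transformed to (kx, omega); each depth step
    applies the downward-continuation phase shift exp(-i kz dz), with
    kz = sqrt((2 omega / v)^2 - kx^2) (exploding-reflector half velocity),
    and the image at each depth is the sum over all frequencies (t = 0)."""
    n_x, n_t = section.shape
    D = np.fft.fft2(section)                         # to (kx, omega)
    kx = 2 * np.pi * np.fft.fftfreq(n_x, d=dx)[:, None]
    w = 2 * np.pi * np.fft.fftfreq(n_t, d=dt)[None, :]
    kz2 = (2.0 * w / v) ** 2 - kx**2
    prop = kz2 > 0                                   # propagating region only
    kz = np.sqrt(np.where(prop, kz2, 0.0))
    shift = np.where(prop, np.exp(-1j * kz * dz), 0.0)
    image = np.zeros((n_x, n_z))
    for iz in range(n_z):
        D = D * shift                                # one depth step down
        # imaging condition t = 0: sum the frequency components
        image[:, iz] = np.real(np.fft.ifft(D.sum(axis=1)) / n_t)
    return image

# Impulse response: a spike on the centre trace spreads into the
# migration operator, symmetric about that trace.
spike = np.zeros((31, 64))
spike[15, 20] = 1.0
img = gazdag_phase_shift(spike, dt=0.004, dx=12.5, v=2000.0, dz=8.0, n_z=24)
```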


An example of the effects of migration is given in Figure P-50. The input is the
data displayed in Figure P-48. These data have undergone two passes of
migration: first a 3D extended Stolt f-k migration in the inline direction,
followed by a finite difference migration in the crossline direction.

Figure P-50

2D Migration

2D lines in the dip direction will experience the appropriate shifts in time and
space under migration. However, the strike lines, being flat, will not be shifted at
all. The consequence of this is that 2D lines no longer tie at intersections when they
have been migrated. The correct solution to migration of 2D data is to interpret
the stack data and use this to generate the time maps. These time maps then have
to be migrated and there are a number of applications on the market to do this.
They all work in a similar way, which is effectively to use profiles drawn
perpendicular to the contours. These profiles are then migrated, Figure P-51, and
used to annotate the map with the migrated times. They are contoured, using the
original map as a guide.


Figure P-51. Map Migration


3.1.20 Filter and Scaling

As a final process, the data are often filtered to remove high-frequency noise and
scaled to give an amplitude balance over the complete time interval. The data in
the bottom section of Figure P-49 have had a final filter and scaling applied, and
the result is shown in Figure P-52.

Figure P-52

3.1.21 Image-Ray Raytracing


Image-rays traced through the depth model


The raytraced migrated time section, showing the effects of refraction through
the model

The large amplitudes at the fault plane are spurious, presumably as a result of
the steep dip of the horizon here.
