3 Seismic Data Processing: Reasons To Migrate
Reasons to Migrate
Figure P-1. The migration algorithm moves the seismic depth point
S to its true location at D.
Seismic Interpretation
Figure P-2
The swept frequency signal entering the ground from the base plate is up to 20s
long and contains frequencies from as low as 5 to 10Hz up to perhaps as high as
200Hz.
Each seismic reflection will be a scaled version of this signal. The amplitude
response of the signal is broad-band but the phase spectrum is complicated,
Figure P-4, and so it is necessary to simplify it and convert it into a signature
similar to that derived from an impulsive source such as an airgun. This is done by
correlating the signal measured at the baseplate of the Vibroseis truck with that
recorded by the geophones. This is in effect correlating the Vibroseis signal with
itself and is called autocorrelation. Correlation in the frequency domain is
equivalent to multiplying the amplitude spectra and subtracting the phase spectra.
So autocorrelation squares the amplitude spectrum and reduces the phase to zero,
i.e. results in a zero-phase wavelet, also called a Klauder wavelet, Figure P-5.
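The zero-phase property can be checked numerically. The sketch below is an illustration only; the sweep parameters (10-80 Hz over 8 s) are invented, not taken from the text:

```python
import numpy as np

def linear_sweep(f0, f1, duration, dt):
    """Linear Vibroseis sweep from f0 to f1 Hz over the given duration (s)."""
    t = np.arange(0.0, duration, dt)
    # Instantaneous phase of a linear sweep
    phase = 2.0 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2.0 * duration))
    return np.sin(phase)

dt = 0.002                                   # 2 ms sampling
sweep = linear_sweep(10.0, 80.0, 8.0, dt)    # 8 s, 10-80 Hz sweep

# Autocorrelating the sweep collapses it to a Klauder wavelet: the
# amplitude spectrum is squared and the phase reduced to zero.
klauder = np.correlate(sweep, sweep, mode="full")
klauder /= np.abs(klauder).max()
mid = len(klauder) // 2                      # zero-lag sample
```

Because the wavelet is zero phase it is symmetric about zero lag, with its maximum there.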
Figure P-4
Figure P-5
The Vibroseis correlation is carried out in the field, so the records delivered to
the processing centre will contain the Klauder wavelet convolved with the
broadband earth reflectivity. Some of the subsequent processes assume the data to
be minimum phase, so it may be necessary to change the zero-phase wavelet into
its minimum-phase equivalent. This is done by signature deconvolution.
Marine data is referenced to mean sea level and so the static correction involves
applying a time shift to correct for the depth of the sources and streamers. Also,
any delay in the recording instruments is allowed for at this stage.
On land the situation is much more complicated because of the topography that
may exist over the survey area and the need to remove the effect of rapid velocity
changes in the near surface, or weathering layer. A refraction survey can be shot
to determine the required velocities to enable the seismic data to be corrected to a
defined datum.
If dynamite was used as the source then shotholes would have been drilled to the
bedrock, or to below the weathering layer if bedrock was not reachable. A
geophone placed at the surface near each shothole records the uphole time and
this provides a good estimate of the velocity of the weathering layer, Vw, at each
shot position, Figure P-6. To obtain an estimate of the velocity of the bedrock, Vb, a
deeper shothole may have been drilled at the beginning of the survey and a
number of shots fired over this zone. The travel times recorded from the shots
over this deep interval provide an estimate of the bedrock velocity.
These uphole velocities provide a valuable calibration for the refraction survey
and enable the long wavelength velocity variation to be determined. A detailed
knowledge of the topography of the survey combined with the velocity information
from the uphole data and refraction survey allows a correction to be made to a
specified datum.
Figure P-6
This is carried out in two phases. The first correction is to a smoothly varying
surface, which averages out the effects of the topography. At the end of the
processing, static corrections are applied to correct the data to a horizontal datum,
often mean sea level.
If the source used is Vibroseis then any uphole data may well not be available. A
refraction survey can be used to determine the velocities of the weathering layer
and the bedrock without any additional calibration. The first break times are
picked for each shot record and a plot made of these picked times t against the
distance along the survey x, Figure P-7.
Figure P-7
Figure P-8
3.1.3 Deconvolution
Wiener Filtering
This is a method of designing a filter which, when convolved with an input signal,
minimises, in a least squares sense, the difference between the actual output and
the desired output. The filter coefficients (f0, f1, …, fn-1) are determined by
solving the normal equations

R f = g

where R is the Toeplitz matrix formed from the autocorrelation lags
(r0, r1, …, rn-1) of the input signal and g is the crosscorrelation of the desired
output with the input.
When the desired output is a unit spike at zero lag (1,0,0,0,0…,0) then the
resulting filter is called a least squares inverse filter, or a spiking filter.
Items one and two above require only the autocorrelation of the input signal to
calculate the filter coefficients. For spiking deconvolution, an important
requirement for the stability of the above equation is that the input signal is
minimum phase, which means that its inverse is also minimum phase and the
resulting filter coefficients are finite.
Predictive Deconvolution
If the desired output is a time-advanced version of the input, item 2 above, then
this gives rise to the process known as predictive deconvolution. If the input signal
is w(t) then we want to predict w(t + α), where α is the prediction lag. If the
resulting filter coefficients are (f0, f1, f2, …) and s(t) is the convolution of these
coefficients with the input signal, the error in the output is [w(t + α) - s(t)]. The
filter that yields this error turns out to be (1, 0, 0, …, 0, -f0, -f1, …), where there
are α - 1 zeroes (α measured in samples) in this, so called, prediction error filter.
When convolved with the input signal, this prediction error filter yields the error
in the prediction process. An example of the use of predictive deconvolution is in
the removal of multiples. The known multiple period will be predictable while the
seismic reflections will not (they are assumed to be randomly distributed) and so if
the prediction lag is adjusted to be a little larger than the first zero-crossing of the
autocorrelation, the error that results should be the seismic trace minus the
multiples. If the prediction lag is a little longer than the seismic wavelet then little
pulse compression occurs and the strict requirements for a minimum phase input
can be relaxed. It is common now to output a zero-phase wavelet from the
signature deconvolution and apply predictive deconvolution with a long prediction
lag, without adversely affecting the zero-phase pulse.
Predictive deconvolution is often applied both pre- and post-stack for multiple
removal.
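A gapped predictive deconvolution can be sketched as follows. The multiple period, decay constant and trace length are invented for illustration; the filter is designed from the trace autocorrelation, with the desired output advanced by the prediction lag:

```python
import numpy as np

def prediction_error_filter(trace, nfilt, lag):
    """Prediction error filter (1, 0, ..., 0, -f0, -f1, ...) with lag-1
    zeroes; f predicts trace(t + lag) from trace(t) (lag in samples)."""
    full = np.correlate(trace, trace, mode="full")
    r = full[len(trace) - 1 :]                     # autocorrelation lags 0,1,...
    R = np.array([[r[abs(i - j)] for j in range(nfilt)] for i in range(nfilt)])
    g = r[lag : lag + nfilt]                       # desired output is advanced
    f = np.linalg.solve(R, g)
    return np.concatenate(([1.0], np.zeros(lag - 1), -f))

# Synthetic trace: random reflectivity plus a multiple train of period
# 20 samples, decaying by a factor of 0.5 per bounce.
rng = np.random.default_rng(0)
refl = rng.standard_normal(2000)
period, c = 20, 0.5
trace = refl.copy()
for i in range(period, len(trace)):
    trace[i] -= c * trace[i - period]              # add the multiples

pef = prediction_error_filter(trace, nfilt=1, lag=period)
decon = np.convolve(trace, pef)[: len(trace)]      # multiples suppressed
```

With the prediction lag set to the multiple period, the filter predicts (and the error filter removes) the periodic multiples while leaving the unpredictable reflectivity largely untouched.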
Signature Deconvolution
As an example, Figure P-9 shows the time response of a far-field signature, with its
amplitude and phase spectra. The signature has been sampled at 2 ms.
Figure P-9
This wavelet has been used to convert the shot records to zero phase. The
designature operator is shown in Figure P-10a and the operator convolved with
the original signature is given in Figure P-10b. A comparison of a shot record
before and after signature deconvolution is shown in Figure P-11 and P-12
respectively. The change in the shape of the pulse is quite subtle in this case.
Figure P-10a
Figure P-10b
Figure P-12 The Shot Record after Zero Phase Signature Deconvolution
When a shot is fired, the energy radiates outwards as a sphere and at any
particular location on the surface of this sphere, the energy reduces as the sphere
expands. The correction for this loss of energy is known as the spherical
divergence correction. The surface area of a sphere of radius r is 4πr², so the
energy decreases according to the square of the distance travelled, r. The
amplitude loss is therefore proportional to 1/r and the theoretical correction should
be the distance travelled, Vt. At this stage in the processing sequence the velocity
profile of the earth is not known and so a correction proportional to the time
travelled is often applied.
Amplitude is also reduced exponentially with distance by absorption:

A = A0 e^(-αx)

so an appropriate correction could be proportional to e^(αt) and this is sometimes
applied. However, it is more usual to run tests on the data to see what type of
correction is adequate at this stage. It is often the case that a correction
proportional to t² is applied to account for the combined amplitude loss due to
spherical divergence and anelastic attenuation. If historical velocities are available
then a vt² correction can be used to balance the amplitudes.
Figure P-13(a) shows a shot record before amplitude correction and P-13(b) after
a t² correction has been applied. It is clear from the result that this type of
correction is adequate to balance the amplitudes at longer times.
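A minimal sketch of such a gain, assuming samples are simply scaled by a power of their travel time (the decay model below is synthetic):

```python
import numpy as np

def time_power_gain(trace, t, power=2.0):
    """Scale each sample by t**power; power=2 combines spherical
    divergence with an empirical allowance for anelastic attenuation."""
    return trace * t**power

dt = 0.004
t = dt * np.arange(1, 1001)               # avoid the t = 0 sample
decaying = 1.0 / t**2                     # synthetic 1/t**2 amplitude decay
balanced = time_power_gain(decaying, t)   # restored to a flat amplitude
```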
Because the earth is not a perfectly elastic medium, seismic waves experience
anelastic attenuation that results in a preferential loss of high frequencies, which,
when expressed in the frequency domain, has the form:

A(ω) = e^(-ωT/2Q)

where ω is the angular frequency, T is the two-way travel time through the
medium and Q is a measure of the absorption, called the ‘quality factor’. The
absorption also results in a phase shift which is assumed to be minimum phase,
and so is related to the amplitude spectrum through the Hilbert transform of its
logarithm.
Inverse Q filtering attempts to reverse both the loss in frequency and the
introduced delay to the seismic pulse, given the noise limits of the data.
Deconvolution is used to design the inverse operators of A(ω) over time, without
amplifying the noise, and these operators are applied in a time dependent fashion
down the seismic traces. An example is shown below. Figure P-14 shows the
seismic data before the application of inverse Q filtering, while Figure P-15 shows
the results of a test to compare the boost in high frequencies as a result of
assuming a Q = 125 (left half of the section) and Q = 350 (the right half). The
difference in frequency content of the two halves of the data is evident.
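The attenuation law and a stabilised inverse gain can be sketched in the frequency domain. The clipping level max_gain below is an invented stabilisation parameter, standing in for the noise-limited operator design just described:

```python
import numpy as np

def q_amplitude(freqs, T, Q):
    """Anelastic attenuation A(w) = exp(-w*T/(2*Q)), w = 2*pi*f, after
    two-way time T (s) through a medium of quality factor Q."""
    w = 2.0 * np.pi * np.abs(freqs)
    return np.exp(-w * T / (2.0 * Q))

def inverse_q_gain(freqs, T, Q, max_gain=20.0):
    """Stabilised inverse operator: 1/A(w), clipped so that noise at the
    heavily attenuated high frequencies is not amplified without limit."""
    return np.minimum(1.0 / q_amplitude(freqs, T, Q), max_gain)

freqs = np.fft.rfftfreq(1024, d=0.002)    # 0-250 Hz at 2 ms sampling
A125 = q_amplitude(freqs, T=2.0, Q=125)
A350 = q_amplitude(freqs, T=2.0, Q=350)
gain = inverse_q_gain(freqs, T=2.0, Q=125)
```

As in the Figure P-15 comparison, the lower Q attenuates the high frequencies more strongly, so assuming Q = 125 applies a larger high-frequency boost than Q = 350.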
Figure P-14
Figure P-15
A single shot gather and its f-k transform are shown in Figure P-16. Many
dipping events can be seen at longer offsets. These are refractions and near-
surface waves and the ground-roll can be seen as the steepest dipping event
travelling from top to bottom of the shot record. The dipping events on the gather
transform to linear events beginning from the zero wavenumber. The lowest
frequency in this data is about 7 Hz. A ‘pie-slice’ is used to define the limits of the
dipping noise and remove it in the f-k domain. The result of the dip filtering on the
shot record is shown in Figure P-17. Clearly not all the ground roll has been
removed in this example.
Figure P-18
Figure P-19
Figure P-20
We need to establish a velocity function for each gather, which can be used to
correct the moveout hyperbolae and flatten all the events. Every gather will
require a different function but instead of analysing all the gathers, it is sufficient
to make an analysis every 250m or 500m and interpolate the velocities in between.
In a 3D data volume we will end up with an interpolated velocity cube with actual
analysed velocities located at regular samples throughout the cube.
Figure P-21.
A range of velocities is applied to a single gather, shown on the left side of the
figure above, and for each velocity a coherence measure (called the semblance) is
calculated over a window, usually about 120ms wide, and output, shown on the
right hand side of the figure. The window is moved later in time and the process
repeated until the entire length of the gather has been covered. The semblance
values are contoured and the velocity function extracted by joining through the
contour peaks; seen as the white line joining the peaks of the plot on the right. The
central section shows the stack panels. Here, the gathers around and including the
gather displayed in the figure have been stacked with the velocity functions shown
as the faint white lines in the semblance plot and labelled V4 to V10. This allows
the optimal velocity function to be chosen.
In conjunction with the semblance plots, velocity picking can be augmented by the
use of Constant Velocity Stacks (CVS) where a gather is stacked repeatedly at
different velocities and also with velocity fans in which the data is stacked with
percentage variations of a picked velocity function. Again this allows the velocities
to be chosen which produce the best stack response.
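The semblance measure itself can be sketched for a single trial (t0, v) pair. The gather below is synthetic (one hyperbolic event carrying a Gaussian pulse), and the window width follows the ~120 ms figure quoted above:

```python
import numpy as np

def hyperbola(t0, x, v):
    return np.sqrt(t0**2 + (x / v) ** 2)

def semblance(gather, t, offsets, t0, v, half_win=0.06):
    """Semblance of moveout-corrected amplitudes in a +/-60 ms window
    about t0 for trial velocity v. gather has shape (nt, nx)."""
    dt = t[1] - t[0]
    num = den = 0.0
    for tau in np.arange(t0 - half_win, t0 + half_win, dt):
        amps = np.array([np.interp(hyperbola(tau, x, v), t, gather[:, i])
                         for i, x in enumerate(offsets)])
        num += amps.sum() ** 2
        den += (amps ** 2).sum()
    return num / (len(offsets) * den) if den > 0 else 0.0

t = np.arange(0.0, 3.0, 0.004)
offsets = np.arange(0.0, 1100.0, 100.0)
# One reflection at t0 = 1.0 s with a true stacking velocity of 2000 m/s
arrivals = hyperbola(1.0, offsets, 2000.0)
gather = np.exp(-((t[:, None] - arrivals[None, :]) / 0.02) ** 2)

s_best = semblance(gather, t, offsets, 1.0, 2000.0)
s_slow = semblance(gather, t, offsets, 1.0, 1500.0)
s_fast = semblance(gather, t, offsets, 1.0, 2600.0)
```

Scanning t0 and v over a grid and contouring the result reproduces the right-hand panel of Figure P-21: the correct velocity gives the semblance peak.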
When the gathers have been corrected using a velocity function, the best stack
response occurs if the events are perfectly flat. If there are statics present on the
gather, then the events will not be flat and this will degrade the quality of the
stack. Residual statics will correct this situation by estimating the time shifts
required, applying them automatically and so flattening the events in preparation
for stacking. The key assumption in estimating the required time shifts is that they
are the result of the source and receiver locations at the surface and are not due to
raypath bending in the subsurface. This means that the raypaths are assumed to
travel vertically through the near surface layers, Figure P-22.
(1) Pick the travel times of the event to be corrected on the gathers
(2) Decompose the travel times into the constituent components, namely the
source and receiver statics, the structural variation along the horizon and the
residual moveout term, which takes into account the poor moveout
correction of the horizon.
(3) Apply the source and receiver static shifts to the pre-NMO corrected gathers.
Figure P-22.
The travel time picking is done automatically by choosing a reference trace and
then crosscorrelating it with the other traces in the gather. The decomposition into
the individual terms is done in a least squares sense by solving a large number of
equations in an efficient way. And finally the source and receiver shifts obtained
from the decomposition are applied to the gathers. An example of the benefits of
applying residual statics is shown below.
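The decomposition in step (2) can be sketched as a least-squares problem. This toy version keeps only the source and receiver terms, dropping the structure and residual-moveout terms, and uses invented geometry and statics:

```python
import numpy as np

ns, nr = 8, 12                               # shots and receivers (invented)
rng = np.random.default_rng(1)
s_true = rng.normal(0.0, 0.008, ns)          # source statics, seconds
r_true = rng.normal(0.0, 0.008, nr)          # receiver statics, seconds

# One picked time shift per shot-receiver pair: t(i,j) = s(i) + r(j)
A = np.zeros((ns * nr, ns + nr))
b = np.zeros(ns * nr)
for i in range(ns):
    for j in range(nr):
        A[i * nr + j, i] = 1.0               # source term
        A[i * nr + j, ns + j] = 1.0          # receiver term
        b[i * nr + j] = s_true[i] + r_true[j]

# Least-squares decomposition; the solution is only unique up to a
# constant traded between the source and receiver terms.
est, *_ = np.linalg.lstsq(A, b, rcond=None)
predicted = A @ est                          # reproduces the picks
```

Because each static is constrained by many picks, the system is heavily overdetermined, which is what makes the surface-consistent estimate robust against noisy individual picks.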
Figure P-23.
Figure P-24
Figure P-25
The top panel of Figure P-23 shows the stacked data without the application of
residual statics. This data has a major statics problem, which results in lack of
reflector continuity and loss of high frequencies. Multiple passes of surface
consistent residual statics were applied, together with manual statics picked from
common shot and receiver stacks (Figure P-25). These contributed to a dramatic
improvement in reflector continuity, most obviously at c. 0.9-1.0s (Figure P-23(a)
middle panel, and P-23(b), Figure P-24(c)). However, despite the intensive residual
static effort, static anomalies remained, evident in displays of the unstacked CMPs
(Figure P-25(a)). Also the high-frequency character of the original processing
remained elusive. Time-varying CMP consistent trim statics were applied (Figure
P-23(a) bottom, Figure P-23(b) right, Figure P-25(b)). These were computed
CMP-by-CMP every 200ms over 800ms windows, with a maximum shift of 8ms.
A considerable improvement in high-frequency detail resulted which matched the
original processing. No obvious artefacts were introduced into the section.
3.1.10 Binning
The data are sorted into Common Midpoint (CMP) Gathers ready for normal
moveout correction and stacking. With 3D data this gathering process is known as
binning. As described in the section on seismic acquisition, the geometry is
designed to sample each reflection point in the subsurface n times, where n is the
fold of data required and the reflection points are assumed to lie at the common
midpoint positions of the offsets.
The shooting geometry controls the bin size and each shot-receiver pair is
assigned to a bin based on its midpoint position. However, because of cable
feathering and possible navigation obstacles, such as producing platforms and
military zones, the number of reflection points within a bin can vary significantly
from one bin to another. This variation degrades velocity analysis, stacking and
migration.
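The midpoint assignment itself is a simple calculation. A sketch with invented coordinates and a 25 m x 12.5 m bin size:

```python
import numpy as np

def assign_bins(sx, sy, gx, gy, origin, inline_size, xline_size):
    """Assign each shot-receiver pair to a bin from its midpoint.
    Returns integer (inline, crossline) bin indices."""
    mx = 0.5 * (np.asarray(sx) + np.asarray(gx))
    my = 0.5 * (np.asarray(sy) + np.asarray(gy))
    ix = np.floor((mx - origin[0]) / inline_size).astype(int)
    iy = np.floor((my - origin[1]) / xline_size).astype(int)
    return ix, iy

# Two hypothetical shot-receiver pairs
ix, iy = assign_bins([0.0, 100.0], [0.0, 0.0], [200.0, 400.0], [10.0, 30.0],
                     origin=(0.0, 0.0), inline_size=25.0, xline_size=12.5)
```

Counting the pairs that land in each (ix, iy) cell gives a fold map; flexed binning widens the crossline acceptance window before this count is made.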
Figure P-26
The best seismic processing results are obtained when the bins all contain
reflection points from near offsets and include the entire offset range from
predominantly the same sail line. In order to try and obtain a uniform number of
reflection points within each bin and to include a full range of offsets, the bin size
is often expanded or ‘flexed’ to incorporate traces occupying an adjacent bin.
Figure P-26 shows fifteen bins, which could be 25m x 12.5m in size, and the
possible inline and crossline expansion factors. It is usual to expand the bin in the
crossline direction only and maintain the original inline bin dimension.
Figure P-27(a)
Figure P-27(b)
Figure P-27(a) shows the fold of coverage over a survey area using a static bin
size. In Figure P-27(b) the bins have been expanded in the crossline direction by a
factor of two, and this produces a more uniform fold within the bins.
If the seismic data were acquired with the source and receiver located in the
same surface position, the result would be a zero-offset, or normal incidence
section. In practice, it is necessary to acquire the data over a wide range of offsets
in order to stack the data and increase the signal to noise ratio. The resulting
stacked section is an approximation to a normal incidence section, the degree of
approximation being dependent on the geological complexity in the subsurface. In
order to stack the data it is necessary to correct all the events in the gather to their
zero-offset time. The time difference between the offset ray path and the normal
incidence ray path is called normal moveout and the correction of tx to t0 is the
normal moveout correction, Figure P-28.
Figure P-28
For any observed travel time tx and offset x, a trial velocity is used in the travel
time equation to calculate the moveout Δt. This is applied to the gather. The aim is
to determine the velocity function which flattens all the primary events on the
gather so that all the traces in the gather can be stacked to produce a single output
trace. In this way, the primary events are enhanced while noise and multiples,
which will have residual moveout, are degraded. Figure P-30 shows an example of
NMO corrected gathers using the standard two-term NMO equation.
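The correction maps the amplitude found at tx = sqrt(t0² + x²/v²) back to t0. A sketch for a single trace, with a synthetic event and invented parameters:

```python
import numpy as np

def nmo_correct(trace, t, x, v):
    """Two-term NMO: for each output time t0, read the input amplitude at
    tx = sqrt(t0**2 + (x/v)**2). v may be a scalar or one value per t0."""
    tx = np.sqrt(t**2 + (x / v) ** 2)
    return np.interp(tx, t, trace, right=0.0)

t = np.arange(0.0, 4.0, 0.004)
x, v, t0 = 2000.0, 2500.0, 1.2
tx_event = np.sqrt(t0**2 + (x / v) ** 2)         # 1.442 s arrival
trace = np.exp(-((t - tx_event) / 0.02) ** 2)    # event at its offset time

corrected = nmo_correct(trace, t, x, v)          # event moved back to 1.2 s
```

Applying this with the correct velocity to every trace in the gather flattens the hyperbola, ready for stacking.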
Obviously the single-layer, constant velocity earth model shown in Figure P-28 is
far too simple and the next level of complication to employ is to assume that the
earth is a sequence of plane, horizontal layers. The travel time equation for this
case has an infinite number of terms, the first three of which are

t² = c1 + c2 x² + c3 x⁴ + …

where c1 = t0² and c2 = 1/Vrms² (this equation is derived in section 5.6.2).
For offsets up to about the depth of interest the two-term equation, with the
velocity V assumed to be equivalent to Vrms, is sufficient to correct the moveout on
the gathers. For longer offsets, the three-term equation is necessary to flatten the
events at the farthest offsets. An example of using the three-term equation on the
same set of gathers displayed in Figure P-30, is shown in Figure P-31. When
comparing the two figures, it is clear that some events are flatter over the full
range of offsets when using the three-term NMO equation.
However, not all events in Figure P-31 are truly flat. Some still show residual
moveout at the far offsets. This may be the result of anisotropy, a commonly
observed effect in which the velocity of sound in a medium depends on the
direction of propagation. As a first approximation, it is assumed that the earth is
vertically transversely isotropic, which means that acoustic energy travels faster in
the horizontal direction than in the vertical, but within the horizontal plane the
velocity is independent of azimuth. Seismic energy travelling from the shots to the
receivers has a horizontal component and this component increases with offset.
Figure P-32 shows the gathers corrected with the three-term NMO equation and
an additional term which assumes that the horizontal seismic velocity is 10%
faster than the vertical.
The travel time equation has to be modified in the presence of dipping layers by
replacing the velocity v by v/cos θ, where θ is the dip (Levin, 1971), and is written as
In order to recover the RMS velocity v, the velocities used to stack the data in
the presence of dip must be divided by the cosine of that dip. This is done by
posting and contouring the rms velocities and then applying some form of
smoothing filter. The cosine correction is applied to these smoothed rms velocities.
It is usual to include DMO or dip moveout in the processing sequence, see section
3.1.12. This process removes the dip dependence of the stacking velocity and so the
cosine correction is not necessary in this case.
For a zero offset time of 1.0s and a velocity of 3000m/sec, the travel time at an
offset of 6000m is 2.236s. So Δt is 1.236s and an event at this offset has to be moved
by 1.236s to correct it to t0. If the wavelet is 60 msec long, it will be stretched over
this time gate after NMO and will therefore end up being much reduced in
frequency. The NMO stretch is given by:

Δf/f = Δt(NMO)/t0

For the above example, if the dominant frequency f is 25Hz, then Δf, the change
in frequency caused by the NMO correction, is 31Hz. This is greater than the
dominant frequency and so the wavelet will be stretched to a very low frequency
and will degrade the stack. For this reason, a mute is applied to zero any data that
will suffer an NMO stretch of more than about 40%. The resulting stack data will
therefore vary in fold over the length of the mute, from perhaps two or three fold
near time zero and increasing progressively to be full fold at the end of the mute.
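The worked example can be checked directly; the 40% figure is the mute threshold quoted above:

```python
import math

t0, v, x = 1.0, 3000.0, 6000.0
tx = math.sqrt(t0**2 + (x / v) ** 2)   # offset travel time
dt_nmo = tx - t0                       # moveout removed by the correction
f = 25.0                               # dominant frequency, Hz
df = f * dt_nmo / t0                   # frequency change from NMO stretch

print(round(tx, 3), round(dt_nmo, 3), round(df, 1))   # 2.236 1.236 30.9
# The stretch dt_nmo/t0 is about 124%, far beyond a 40% mute limit, so
# this sample would be muted.
```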
The seismic section below, Figure P-33, shows the mute on the far left of the
section. The data are full fold where the mute stops at about 3.5s. The fold of the
data gradually decreases from this time up to the surface, where it may be only
two or three fold, all the remaining traces in the gather having been removed by
the mute.
Figure P-33
The depth model is the same as that used in the acquisition example of shot –
receiver raytracing:
This type of raytracing models the CMP gathers that will be obtained from
collecting all the shots and receivers that have a common midpoint location.
With a plane horizontal layer the reflection point associated with a Common
Midpoint (CMP) gather occurs in the same location over all offsets, Figure P-34. If
the layer is dipping then the reflection point on the layer moves updip as the
offsets increase.
Figure P-34
Also, the stacking velocity for a dipping layer increases as 1/cosine of the dip. So
for the above example with a layer dipping at 18° the stacking velocity is
2103m/sec instead of 2000m/sec for the horizontal case. Obviously it is necessary to
correct for these effects so that we can stack the data at the correct velocity and
remove the reflector point dispersal and so produce a true zero-offset section. The
process that is designed to make these corrections is called Dip Moveout or DMO,
Figure P-35.
Figure P-35
1) Velocity Analysis
7) Velocity Analysis
Figure P-37. The single offset plane shown in Figure P-35, but with DMO
applied.
With reference to Figure P-39, the time migration will move the reflection point
from the normal incidence position NI to the true image position IM vertically
above the reflection point R. Any one of the migration algorithms discussed under
the Migration section (3.1.17) is capable of achieving this, but the simplest
approach is to use a constant velocity migration, which will improve the velocity
analysis and can be easily removed by demigration after stack, in preparation for
a variable velocity migration.
Figure P-39
A possible processing flow to include PSTM would be the same as that for DMO
application above, up to stage 4 and then:
8) Velocity Analysis
Figure P-40
Figure P-41 shows the data after the application of Pre-Stack Time Migration
using a constant velocity. As an interesting comparison, Figures P-42 and P-43
show the same data set, but with the DMO and PSTM processes applied using a
single, variable velocity function, instead of a constant velocity. The PSTM results
with the variable velocity are superior to those with the constant velocity;
particularly in the deeper parts of the section around 4500ms, as one would
anticipate. In this instance, the constant velocity result was considered good
enough to improve the velocity picking and therefore the stack response, and it is
easier to demigrate a constant velocity migration than one that has used a variable
velocity. The demigration process is called ‘reverse time migration’.
Figure P-41
Figure P-42
Figure P-43
The Kirchhoff summation method can be used to migrate the 3D data in one
pass. The migration works on common-offset data and the image is formed by
summing weighted amplitudes along diffraction curves that are constructed, using
the rms velocity, from the diffraction travel time equation
The algorithm works well in the presence of strong vertical velocity contrasts
and handles gradual lateral velocity variations by using the three-term moveout
equation, implying that the moveout under these conditions is still hyperbolic. In
fact, lateral velocity variations give rise to non-hyperbolic moveout and under
these circumstances prestack depth migration is required. So a judgment must be
made regarding the severity of the lateral velocity variation and whether
Kirchhoff PSTM will correctly image the data.
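For the zero-offset case the summation can be sketched directly. The velocity, grid and diffractor below are invented, and the amplitude weighting and phase corrections of a production scheme are omitted:

```python
import numpy as np

def kirchhoff_zero_offset(section, t, xs, v):
    """Migrate a zero-offset section by summing amplitudes along the
    diffraction hyperbola t(x) = sqrt(t0**2 + 4*(x - x0)**2 / v**2)
    for every output point (t0, x0)."""
    image = np.zeros_like(section)
    for i0, t0 in enumerate(t):
        for j0, x0 in enumerate(xs):
            tx = np.sqrt(t0**2 + 4.0 * (xs - x0) ** 2 / v**2)
            image[i0, j0] = sum(
                np.interp(txi, t, section[:, j], right=0.0)
                for j, txi in enumerate(tx))
    return image

# A point diffractor at (0.4 s, 500 m) produces a hyperbola on the section
t = 0.01 * np.arange(101)
xs = 50.0 * np.arange(21)
v = 2000.0
arrivals = np.sqrt(0.4**2 + 4.0 * (xs - 500.0) ** 2 / v**2)
section = np.exp(-((t[:, None] - arrivals[None, :]) / 0.02) ** 2)

image = kirchhoff_zero_offset(section, t, xs, v)
```

The summed energy collapses back to the diffractor apex, which is the essence of the migration.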
Figure P-44
Figure P-45 shows the result of stacking the data with the original velocities and
this should be compared to the stack in Figure P-46 using the revised velocity field
derived from the pre-stack time migrated data.
Figure P-45
Figure P-46
3.1.16 Stacking
The traces within a common midpoint gather are summed together to produce a
single stacked trace. All gathers are treated in this way and the result is a
migrated stack section, although, unless a full 3D Kirchhoff PSTM has been
applied, the migration applied prestack is sub-optimal. Consequently, the data are
demigrated to remove the effects of the migration so that an optimal post-stack
processing sequence can be applied. The data in Figure P-47 have been pre-stack
time migrated using a constant velocity of 1600 m/s, the velocities have been
picked, the data stacked and then demigrated using the same constant velocity.
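The stack itself is a fold-normalised sum. A sketch, with muted samples marked as NaN (a common convention, assumed here):

```python
import numpy as np

def stack_gather(gather):
    """Sum the traces of an NMO-corrected gather (nt, nx) and normalise
    by the live fold at each time sample; muted samples are NaN."""
    fold = (~np.isnan(gather)).sum(axis=1)
    total = np.nansum(gather, axis=1)
    return np.where(fold > 0, total / np.maximum(fold, 1), 0.0)

nan = np.nan
gather = np.array([[1.0, 1.0, 1.0],      # full fold
                   [2.0, 2.0, nan],      # one trace muted
                   [nan, nan, nan],      # fully muted sample
                   [0.5, 1.5, 1.0]])
stacked = stack_gather(gather)           # [1.0, 2.0, 0.0, 1.0]
```

Dividing by the live fold rather than the nominal fold prevents the shallow, heavily muted samples from being artificially weak.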
Figure P-47
Noise attenuation, for example F-X filtering, can be applied after stack to
remove random noise in the data. The signal is assumed to be the correlatable
components in the data and the filter is applied in the F-X domain to remove those
frequency components that differ from those of the predicted signal. This is
sometimes called F-X deconvolution but it is not a deconvolution in the strict
sense, merely a prediction filter. This process has been applied to the data in
Figure P-46 and the result is shown in Figure P-48.
Figure P-48
Figure P-49
3.1.19 Migration
Migration Formulae
Dips on the stack section are related to dips on the migrated data by

tan θs = sin θm

where θs is the apparent dip on the stack section and θm is the dip after migration.
The stack section represents a zero offset section, which is the section that would
have been recorded if the source and receiver were in the same surface location.
An alternative way to think about this is to consider the situation where the
sources are located on the reflector itself, with the receivers at the surface. When
the sources explode, the energy that arrives at the receivers first has left the
reflector normally because, by Fermat’s Principle, this is the shortest time path.
When using the exploding reflector model, the times are made correct by using
half the layer velocity, since the energy has only travelled one way to reach the
detectors on the surface. The exploding reflector principle is used in finite difference
migration algorithms.
Time Migration after Stack:- Used with low dips (perhaps <
15°) and small lateral velocity contrasts. Precisely what constitutes ‘small’ depends
on the amount of refraction, and the associated inaccuracy in positioning of
reflectors, that the interpreter is prepared to ignore.
Migration Algorithms
Migration algorithms are based on the one-way scalar wave equation and
assume that all the data presented to them are primary reflections. The three main
categories of algorithms are:
1) those based on Kirchhoff summation,
2) those based on the finite-difference solution of the scalar wave equation, and
3) those that operate in the frequency-wavenumber (f-k) domain, such as the
Stolt and Gazdag methods.
Kirchhoff Summation
This technique sums the amplitudes along a diffraction hyperbola whose shape is
controlled by the subsurface velocity specified. Corrections are made in order to
take into account spherical spreading, the variation of amplitude with reflection
angle and the phase shift associated with Huygens’ secondary sources. These
algorithms are generally good at handling dips up to 90 degrees but have
limitations in their use with lateral velocity variations.
The finite difference techniques are based on the fact that the stack section can
be modelled by the exploding reflector concept, described above. Migration is then
seen as downward continuation of the seismic wavefield followed by imaging. The
imaging condition is effected by setting t = 0 in the extrapolated wavefield. This
can be easily understood because at t = 0 the wavefront has not travelled any
distance and so is still at its origin on the reflector. It therefore has the shape of the
reflector.
Stolt Migration
Gazdag Migration
An example of the effects of migration is given in Figure P-50. The input is the
data displayed in Figure P-48. These data have undergone two passes of
migration. Firstly, a 3D extended Stolt f-k migration was used in the inline
direction, followed by a finite difference migration in the crossline direction.
Figure P-50
2D Migration
2D lines in the dip direction will experience the appropriate shifts in time and
space under migration. However, the strike lines, being flat, will not be shifted at
all. The consequence of this is that 2D lines no longer tie at intersections when they
have been migrated. The correct solution to migration of 2D data is to interpret
the stack data and use this to generate the time maps. These time maps then have
to be migrated and there are a number of applications on the market to do this.
They all work in a similar way, which is effectively to use profiles drawn
perpendicular to the contours. These profiles are then migrated, Figure P-51, and
used to annotate the map with the migrated times. They are contoured, using the
original map as a guide.
As a final process, the data are often filtered to remove high frequency noise and
scaled to give the data an amplitude balance over the complete time interval. The
data in the bottom section of Figure P-49 have had a final filter and scaling applied
and the result is shown in Figure P-52.
Figure P-52
The raytraced migrated time section, showing the effects of refraction through
the model
The large amplitudes at the fault plane are spurious, presumably as a result of
the steep dip of the horizon here.