Seismic imaging has changed radically over the last 80 years and has become
a billion-dollar business. Recording systems with thousands of channels,
fleets of vibrators operating in tandem and helicopter-supported
field operations are commonplace.
But there are environments where such modern systems are too expensive to
operate, or where environmental or community concerns prevent the
use of heavy machinery.
And there are problems in today's oilfield that reflection seismic cannot
address. Enter passive seismic, which is seismic imaging without sources!
Wait a moment. Surely, one has to have some sort of energy source. Right?
Let's say that it is seismic imaging using sources of opportunity rather
than the standard airguns, vibrators or dynamite. A passive seismic
crew merely lays out an array of receivers and ... listens. They
are listening for earthquakes and microseisms -- some naturally
produced and some the result of production activity, but all useful
to create an image of what's going on in the subsurface.
There are two distinct branches of passive seismic.
Passive seismic transmission tomography creates 3-D images using
the observed travel time of seismic signals originating from micro-earthquakes
occurring below the target.
The basic setup is illustrated in figure 1. A sparse array of independent
seismographs is established above the target. The array usually
consists of 20 to 100 stations, each recording the output of a three-component
geophone. Typical imaging areas for such an array are 300 to 1,500
phones are placed 10 to 30 meters below the surface to get away
from the noisy surface environment. The stations may store their
data locally, but often are linked to the processing center by some
form of telemetry.
With the array in place, the survey proceeds by simply listening.
Starting from an initial velocity model, the observed micro-earthquakes are located
in time and space using long-standing location algorithms based
upon picks of the P and S arrival times at each observation station.
Once a number of events has been located, one flips the process:
assume the origin times and hypocenters of the events are known
and use some form of travel-time inversion to estimate a new velocity
model. The three-component nature of the observations allows for estimation
of both the Vp and Vs velocity structures. As more events are added to the dataset,
finer estimates of the velocity structure can be achieved.
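The location step above can be sketched in miniature. What follows is a toy illustration, not a production algorithm: it assumes a homogeneous Vp and Vs, made-up station coordinates and noise-free synthetic picks, and locates a single event by a coarse grid search over the S-P differential times (which conveniently do not depend on the unknown origin time).

```python
import math
from itertools import product

# Hypothetical homogeneous velocities (m/s) -- a real survey uses a 3-D model
VP, VS = 3000.0, 1700.0

# Made-up station coordinates (x, y, z) in meters
stations = [(0.0, 0.0, 0.0), (2000.0, 0.0, 0.0), (0.0, 2000.0, 0.0),
            (2000.0, 2000.0, 0.0), (1000.0, 1000.0, 0.0)]
true_event = (800.0, 1200.0, 2500.0)

# Synthetic P and S picks for the true event (origin time = 0)
picks = [(math.dist(s, true_event) / VP, math.dist(s, true_event) / VS)
         for s in stations]

def misfit(xyz):
    # Sum of squared residuals of the S-P differential times, which
    # cancel the unknown origin time out of the location problem
    total = 0.0
    for s, (tp, ts) in zip(stations, picks):
        d = math.dist(s, xyz)
        total += ((d / VS - d / VP) - (ts - tp)) ** 2
    return total

# Coarse grid search for the hypocenter
grid = product(range(0, 2001, 100), range(0, 2001, 100), range(500, 4001, 100))
hypocenter = min(grid, key=lambda p: misfit(tuple(map(float, p))))
print(hypocenter)  # -> (800, 1200, 2500)
```

In practice the search would use a layered or 3-D velocity model, many events, and a formal inversion rather than a brute-force grid, but the structure of the problem is the same.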
Most of us are surprised to find that there are enough micro-earthquakes
occurring to make this a viable tool. We are conditioned to think
of earthquakes in terms of life-threatening, concrete-crushing events
that happen only rarely. Such events have a local magnitude of 3
or greater. The frequency of earthquakes is observed to fall off log-linearly
with their magnitude -- the Gutenberg-Richter relation. This means that there will be, on a statistical
basis, 10 times as many magnitude 2 earthquakes as magnitude 3,
and 10 times as many magnitude 1 as 2, and so on. The micro-earthquakes
used for passive seismic transmission tomography are typically all
those down to magnitude 0, or even smaller.
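The 10-times-per-magnitude-unit scaling described above can be turned into a one-line estimator. The counts below are hypothetical; the exponent b (the Gutenberg-Richter b-value) is taken as 1, in line with the 10x rule of thumb.

```python
# Gutenberg-Richter: log10 N(>=M) = a - b*M, so dropping the magnitude
# threshold by one unit multiplies the expected event count by 10**b
def expected_count(n_at_mag3, magnitude, b=1.0):
    """Scale a count of magnitude-3-and-above events to a lower threshold."""
    return n_at_mag3 * 10 ** (b * (3.0 - magnitude))

print(expected_count(5, 0))  # -> 5000.0: five M>=3 events imply ~5,000 M>=0
```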
Because processing of the data proceeds in near real time, it is possible
to monitor the effectiveness of the survey and cease field operations
when the particular needs and resolution of the survey have been
met. Survey times of six to 12 months are to be expected.
So when does such an approach to imaging become cost effective?
Out in flat, open country a more conventional reflection survey is probably
a better solution. But in mountainous terrain, passive can be as
much as an order of magnitude less expensive. In environmentally
sensitive areas the benign environmental impact of passive means
that a survey that might otherwise never get permitted becomes possible.
In heavily cultured areas -- those dense with buildings, roads and other
infrastructure -- the low impact of passive seismic again makes the
likelihood of obtaining permits much higher.
With passive seismic emission tomography the micro-seismic activity
itself becomes the imaging target. The most straightforward approach
is to observe and record the direct arrivals of the seismic waves
from these events and to map the distribution of hypocenter locations.
For the most part, the events being considered here are small, with local
magnitudes in the range -3 to -1, and rarely discernible as clean
first breaks on surface recordings. Consequently, much of the work
in this domain uses borehole receivers.
One of the more common applications of emission tomography is hydraulic
fracture monitoring. Typically an array of eight to 12 three-component
geophones is clamped at or just above the reservoir level in a wellbore
near the well where the fracturing will occur. First break picks
are made of the observed events. A mapping of the event locations
over time mirrors the development of fracturing. Often these results
are presented as movies that nicely capture the dynamic nature of the
fracturing process. However, the scarcity of observation wells and the limitations
on observation distance
(usually 1,000 meters or less) are serious impediments to the widespread
usefulness of this downhole methodology.
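One simple calculation behind this downhole monitoring is worth showing: with picks of both the P and S arrivals at a single receiver, the S-P lag alone gives the distance to the event. A minimal sketch, with hypothetical velocities:

```python
# In a homogeneous medium, d/VS - d/VP = dt  =>  d = dt * VP * VS / (VP - VS)
def sp_distance(dt_sp, vp=4000.0, vs=2300.0):
    """Event-receiver distance (m) from the S-P arrival-time lag (s)."""
    return dt_sp * vp * vs / (vp - vs)

print(round(sp_distance(0.085)))  # an 85 ms lag puts the event ~460 m away
```

Combining such distances with the arrival directions resolved by the three-component geophones is what lets a single downhole string map the fracture as it grows.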
A second approach to emission tomography is illustrated in figure 2. Here
an array of geophones is deployed on the surface, typically with
40 to 100 stations distributed over a few square kilometers.
The array is sequentially beam-steered at all points in the subsurface, and
a 3-D map of emission energy is made, which reflects much of the
same information as the hypocenter location map obtained with the downhole
method. The ability to monitor dynamic processes in real time presents many opportunities.
These include fracture monitoring, mapping of fault creep and compaction
and tracking of injected fluids. In a very real way we are putting
a stethoscope on the chest of the earth and listening.
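The beam-steering behind the surface approach can be sketched as a delay-and-sum stack: for each candidate subsurface point, delay every trace by its modeled travel time and see where the stacked energy focuses. Everything below -- geometry, velocity, sampling -- is hypothetical, and the "data" are synthetic one-sample spikes standing in for recorded arrivals.

```python
import math

VP = 3000.0   # hypothetical homogeneous velocity, m/s
DT = 0.002    # sample interval, s
NSAMP = 500

# Made-up 3x3 surface array and a true emission point at depth
stations = [(x, y) for x in (0.0, 1000.0, 2000.0) for y in (0.0, 1000.0, 2000.0)]
source = (1000.0, 1000.0, 1500.0)

def travel_samples(sta, pt):
    dx, dy = sta[0] - pt[0], sta[1] - pt[1]
    return round(math.sqrt(dx * dx + dy * dy + pt[2] * pt[2]) / VP / DT)

# Synthetic traces: one spike per station at the arrival from the source
t0 = 100  # event origin time, in samples
traces = []
for s in stations:
    tr = [0.0] * NSAMP
    tr[t0 + travel_samples(s, source)] = 1.0
    traces.append(tr)

def beam_power(pt):
    # Delay-and-sum: shift each trace back by its modeled travel time,
    # stack, and keep the best stacked amplitude over candidate origin times
    best = 0.0
    for i in range(NSAMP):
        stack = 0.0
        for s, tr in zip(stations, traces):
            j = i + travel_samples(s, pt)
            if j < NSAMP:
                stack += tr[j]
        best = max(best, stack)
    return best

# The emission map is brightest where the stack focuses -- at the source
grid = [(x, y, z) for x in (500.0, 1000.0, 1500.0)
                  for y in (500.0, 1000.0, 1500.0)
                  for z in (1000.0, 1500.0, 2000.0)]
brightest = max(grid, key=beam_power)
print(brightest)  # -> (1000.0, 1000.0, 1500.0)
```

Real emission tomography steers on continuous, noisy records and relies on the stack to pull coherent energy out of noise, but the focusing principle is the one shown here.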
The challenge is that we don't yet have a lot of experience to draw upon
in interpreting these sounds.