Impact of Reprocessing Seismic Data

Advancements in processing and imaging techniques have continued over the last several decades, gradually improving the quality of processed surface seismic data. With the evolution of processing algorithms, for example from 1-D to 2-D to 3-D, processes such as deghosting, designature and demultiple (for offshore seismic data), as well as 5-D interpolation for regularization of data geometry, are now carried out more effectively, leading to more accurate preservation of amplitudes. These algorithmic advancements have gone hand in hand with advances in computing speed, first with faster chips, then with parallelization, and most recently with graphics processing units. Algorithms such as 3-D reverse-time migration that were once seen as too costly are now used routinely on deepwater surveys.

The quality of the processed seismic data depends on various factors, which include the geological conditions (such as the near-surface conditions, topography and lateral velocity changes in the overburden), the survey design (which may be suboptimal due to budget constraints), the quality of the algorithms used for processing (where proprietary algorithms give some processors an edge over their competitors), and finally, the skills of the processing geophysicists. The turnaround time for a processing project should also be added to the list, as a rushed job will probably suffer in quality.

The ultimate objective of interpreting the seismic data with well, core and production data is to evaluate the existence of favorable prospects for oil and gas exploitation, or CO2 storage prospects in this case. When the quality of the existing seismic data is not adequate to perform an interpretation task reasonably, the interpreter looks for other options. Is it feasible to acquire a new survey in the area? Seismic data acquisition comes with its own requirements, such as obtaining permits and complying with survey restrictions. Will the turnaround time and cost of acquisition/processing provide a significantly improved interpretation product? The acquisition of a new 3-D survey might not be practical in areas where production of hydrocarbons is taking place, due to the existing infrastructure. In other areas, urban development and more stringent environmental restrictions could make it impractical to acquire a new survey. In the absence of an improved survey, will reprocessing of seismic data be a good option?

These questions should be answered to see if reprocessing can be justified.

If reprocessing is an option, other considerations include drawing up an effective workflow, selecting appropriate algorithms, and including state-of-the-art techniques such as full waveform inversion, all of which could make a difference to the bottom line. Merging data from different vintages could also be evaluated in the interest of achieving a more reliable end product for interpretation. An important component of the workflow is interaction between the processor and the interpreter, along with the quality control that needs to be carried out. Such efforts are bound to bear fruit in adding interpretation value.

Reprocessing of a legacy seismic volume, or of a volume processed only a few years ago, could also be motivated by fresh information, say from the drilling of a well over the survey, that brings a deeper target to light and creates the need to explore it.

Existing case studies on reprocessing illustrate the advantages in terms of the uplift in data quality seen in gathers and stacked images, improved velocity analysis and better preservation of amplitudes, which opens up the option of using amplitude-versus-offset attributes. In areas with significant lateral velocity variations, carrying out a more detailed velocity analysis on a narrower grid spacing is one of the more important, though time-consuming, processing steps for improving data quality. One major advantage that reprocessing has over the original processing is that the interpreter now has significant insight into the subsurface geology. The interpreter will want the processor to fix bad well ties, minimize events that are now known to be interbed multiples, and sharpen chaotic features that the interpreter now knows to be complex geology rather than seismic noise. With this additional focus and care, reprocessing can illuminate both stratigraphic and structural features not visible before. The cost and time of reprocessing are usually much less than those of acquiring and processing a fresh survey.


Figure 1: Location maps of (a) the Havnsø prospect (outlined), and (b) a zoom-in on the Stenlille area showing the location of wells and the limits of the 3-D seismic survey. Many wells have approximately the same position and plot on top of each other on the map. Courtesy of Google Maps. (After Bredesen et al., 2021)

Reprocessing of seismic data is not a new idea; it has been done since the 1980s. But as already mentioned, the rapid evolution of algorithms and the growth in computing power have made it more common, and rightly so.

Reprocessing a 3-D Survey from Denmark

In Denmark, there are two underground gas storage facilities that serve as a buffer for the supply of gas from the North Sea. One of these is a deep aquifer at a depth of 1,500 meters near Stenlille (figure 1). Natural gas has been injected and stored at Stenlille since 1989, where the reservoir occurs within a domal subsurface structure. There are 20 wells drilled over the Stenlille 3-D area, of which 14 are operational gas injection and withdrawal wells (including ST-2 and ST-19), and six are observation wells (ST-3, 4, 5, 6, 10 and 15) for monitoring pressure in the aquifer around the reservoir and in the caprock. The gas is stored in two separate units within the Gassum Formation: Zones 1-3, which operate as one integrated unit located approximately 22-24 meters below the Gassum top marker, and Zone 5, which is located roughly 38-40 meters below it. These offsets are based on interpretation of the vintage dataset and are slightly different for the reprocessed dataset.

In the April 2022 installment of Geophysical Corner, we discussed a reservoir characterization exercise applied to a 3-D seismic data volume shot over the natural gas storage structure at Stenlille in 1997. At the time, only the stacked seismic data volume was available. Since then, however, the field data for the 3-D seismic survey have been retrieved, and reprocessing was completed in early 2023.

As shown in figure 1, the Havnsø anticlinal structure to the north is a prospective CO2 storage site due to its proximity to two large emission sources, a coal-fired power station and a nearby refinery. No wells have been drilled and no 3-D seismic data have yet been acquired over the Havnsø structure. Some 2-D seismic profiles have recently been acquired that extend from the Stenlille site northward over the Havnsø structure. The only tie points for these 2-D seismic lines are the Stenlille 3-D survey and the well data over it. Besides, the geological understanding gained at the Stenlille site will be very important for seismic interpretation going north toward the Havnsø structure, which is another motivation for reprocessing the Stenlille 3-D seismic data and enhancing our understanding of the Gassum Formation.

The quality of the seismic data after reprocessing appears to be significantly improved, with the added advantage that prestack seismic data can now be utilized for simultaneous inversion and determination of rock-physics parameters. Here we present a qualitative comparison of the legacy and reprocessed data in terms of seismic displays as well as poststack attributes generated on both versions of the data. We intend to make a similar comparison of the prestack attribute generation results in a future article.

The available legacy 16-fold 3-D seismic data covers 56.4 square kilometers and has a relatively low signal-to-noise ratio. The acquisition parameters included source and receiver intervals of 40 meters, a receiver line spacing of 200 meters, a source line spacing of 360 meters, a maximum offset of 3,236 meters, a 2-millisecond sample interval and a 3-second record length; the 40-meter station intervals yielded a bin size of 20 meters × 20 meters. Vibroseis was used as the seismic source (with sweeps of 20 seconds), as the survey location is an urban setting. The processing of this large data volume was completed with Kirchhoff prestack time migration on gathers along with 5-D Fourier regularization, followed by stacking of the data.

Figure 2 shows a comparison of an inline section from the legacy and reprocessed seismic data volumes. A lower signal-to-noise ratio is seen on the legacy section in figure 2a, and the two frequency spectra shown alongside the sections indicate a somewhat wider bandwidth for the reprocessed seismic data. For example, shallow reflection events become much more laterally consistent and continuous on the reprocessed data, owing to the 5-D Fourier regularization technique, which interpolates a spatially irregular dataset so that the impact of data gaps is reduced.

Coherence and Curvature Attribute Comparison

The multispectral energy ratio coherence attribute was generated on the two datasets after preconditioning them with structure-oriented filtering. Applications of multispectral coherence were discussed in the July 2018 Geophysical Corner. Figure 3 shows a comparison of equivalent stratal slices extracted from the multispectral energy ratio coherence volumes generated on the legacy and reprocessed seismic volumes. Notice the more focused and crisp definition of the lineaments on the display in figure 3b. It should be mentioned that a time difference of 8 milliseconds was noticed between the legacy seismic volume and its reprocessed version, which was accounted for while generating the horizon slices shown in figures 3 to 7.
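
For readers interested in the mechanics, the sketch below illustrates the basic energy-ratio (eigenstructure) coherence calculation on a single window of traces, using numpy. It is a deliberately simplified, single-band version shown for illustration only; the multispectral implementation used for figure 3 additionally combines contributions from several spectral bands and follows structure-oriented filtering, and the function and parameter names here are our own.

```python
import numpy as np

def energy_ratio_coherence(window):
    """window: 2-D array (n_traces, n_samples) of neighboring traces
    extracted along structural dip around the analysis point."""
    # remove the mean of each trace before forming the trace-by-trace covariance matrix
    w = window - window.mean(axis=1, keepdims=True)
    cov = w @ w.T                      # (n_traces, n_traces)
    eigvals = np.linalg.eigvalsh(cov)  # eigenvalues in ascending order
    total_energy = eigvals.sum()
    if total_energy <= 0.0:
        return 0.0
    # coherent energy is the part captured by the first principal component
    return float(eigvals[-1] / total_energy)

# toy check: identical traces give coherence near 1, random noise a much lower value
rng = np.random.default_rng(0)
coherent = np.tile(np.sin(np.linspace(0, 6 * np.pi, 64)), (9, 1))
noisy = rng.standard_normal((9, 64))
print(energy_ratio_coherence(coherent), energy_ratio_coherence(noisy))
```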

Similarly, the most-positive curvature (short-wavelength) attribute was computed for both datasets, and a comparison of equivalent stratal slices extracted from the two curvature volumes is shown in figure 4. Notice the better-defined lineaments on the display in figure 4b.
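
As a companion illustration, here is a minimal horizon-based sketch of the most-positive curvature calculation, following the quadratic-surface formulation of Roberts (2001). The attribute compared in figure 4 is the volumetric short-wavelength variant computed by the interpretation software, so this sketch only conveys the underlying arithmetic; the function name and grid spacings are illustrative.

```python
import numpy as np

def most_positive_curvature(z, dx=1.0, dy=1.0):
    """z: 2-D array of horizon times/depths; dx, dy: trace spacings.
    Returns an array of the same shape (edges left as NaN)."""
    kpos = np.full(z.shape, np.nan)
    # local coordinates of a 3x3 window and the design matrix of the
    # quadratic surface z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
    xs, ys = np.meshgrid([-dx, 0.0, dx], [-dy, 0.0, dy])
    x, y = xs.ravel(), ys.ravel()
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    for i in range(1, z.shape[0] - 1):
        for j in range(1, z.shape[1] - 1):
            w = z[i - 1:i + 2, j - 1:j + 2].ravel()
            a, b, c, *_ = np.linalg.lstsq(A, w, rcond=None)[0]
            # most-positive curvature: k_pos = (a + b) + sqrt((a - b)^2 + c^2)
            kpos[i, j] = (a + b) + np.sqrt((a - b) ** 2 + c ** 2)
    return kpos

# toy check: a narrow ridge superimposed on a dipping plane produces a k_pos lineament
yy, xx = np.mgrid[0:50, 0:60]
surface = 0.1 * xx + 2.0 * np.exp(-((xx - 30) ** 2) / 4.0)
print(np.nanmax(most_positive_curvature(surface, dx=25.0, dy=25.0)))
```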

Seismic Facies Classification Using Unsupervised Machine Learning Tools

The attributes used for seismic facies classification with unsupervised machine learning methods were relative acoustic impedance, sweetness, GLCM entropy, total energy, curvedness and spectral magnitudes at 30, 50 and 80 hertz.

All these attributes were generated on both the legacy and the reprocessed seismic versions for use as input to seismic facies classification with two unsupervised machine learning techniques, namely self-organizing mapping and generative topographic mapping.

Each attribute in this suite is briefly described below.

  • Sweetness is a “meta-attribute,” or one computed from others, which in this case is the ratio of the envelope to the square root of the instantaneous frequency (a minimal computation sketch is given after this list). A clean sand embedded in a shale will exhibit a high envelope and a lower instantaneous frequency, and thus higher sweetness, than the surrounding shale-on-shale reflections.
  • GLCM (grey-level co-occurrence matrix) entropy is a measure of disorder or complexity in the data. If the reflectivity along a horizon is smoothly varying, it will exhibit low GLCM entropy.
  • Spectral magnitude: The magnitude of each spectral component ranging from 20 to 100 hertz, which is the effective bandwidth of both the seismic data volumes. Spectral components corresponding to 30, 50 and 80 hertz were used in the multiattribute analysis.
  • Relative acoustic impedance is computed by continuous integration of the original seismic trace with the subsequent application of a low-cut filter. The impedance transformation of seismic amplitudes enables the transition from reflection-interface properties to interval properties of the data, without the requirement of a low-frequency model.
  • The total energy attribute helps isolate low energy chaotic reflectors from higher energy seismic responses.
  • Curvedness defines the magnitude of structural or stratigraphic reflector deformation.
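
To make a couple of these definitions concrete, the sketch below computes sweetness and relative acoustic impedance on a single trace with numpy/scipy. It is a minimal illustration under our own choice of filter and sample interval, not the attribute software used for the study.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def sweetness_and_relative_impedance(trace, dt=0.002, lowcut_hz=10.0):
    analytic = hilbert(trace)
    envelope = np.abs(analytic)                                  # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.abs(np.gradient(phase, dt)) / (2 * np.pi)     # instantaneous frequency, Hz
    # sweetness: envelope divided by the square root of instantaneous frequency
    sweetness = envelope / np.sqrt(np.maximum(inst_freq, 1e-3))
    # relative acoustic impedance: running integration of the trace followed by a
    # low-cut (high-pass) filter to suppress the drift and very low frequencies
    integrated = np.cumsum(trace) * dt
    b, a = butter(2, lowcut_hz, btype="highpass", fs=1.0 / dt)
    rel_imp = filtfilt(b, a, integrated)
    return sweetness, rel_imp

# toy usage on a synthetic trace
t = np.arange(0, 1.0, 0.002)
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-3 * t)
sw, rai = sweetness_and_relative_impedance(trace)
print(sw.shape, rai.shape)
```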

Self-Organizing Maps

SOM is an unsupervised machine learning technique, based on a clustering approach, that generates a seismic facies map from multiple seismic attributes. SOM may be considered as a projection from a multidimensional attribute space to a 2-D “latent” (hidden) space. Usually, the output from an SOM computation is obtained in the form of two projections onto the two SOM axes, which can then be crossplotted directly and displayed using a 2-D RGB color bar. More details can be found in the November 2020, January 2022 and October 2023 installments of Geophysical Corner.
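
As a rough illustration of this workflow, the sketch below trains a small SOM on a matrix of attribute vectors (one standardized row per voxel) and records the two grid coordinates of each sample's winning node, which is what gets crossplotted against the 2-D color bar. It uses the open-source minisom package purely for illustration; the grid size, training length and data are placeholders, and this is not the software used to produce figure 5.

```python
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(1)
attributes = rng.standard_normal((5000, 8))   # 5,000 voxels x 8 attributes (illustrative)

som = MiniSom(x=16, y=16, input_len=attributes.shape[1],
              sigma=1.5, learning_rate=0.5, random_seed=1)
som.random_weights_init(attributes)
som.train_random(attributes, num_iteration=10000)

# project every sample onto the two SOM axes; the (i, j) pair of the winning
# node is what gets crossplotted and displayed with a 2-D RGB color bar
proj = np.array([som.winner(v) for v in attributes], dtype=float)
print(proj.shape)   # (5000, 2)
```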

Figure 5 shows a comparison of the equivalent stratal displays corresponding to the Zone 5 reservoir level, extracted from the SOM crossplot volumes computed for the legacy and reprocessed versions of the seismic data, using a 2-D color bar. Some of the clusters seen on the display in figure 5b are better defined than those shown in figure 5a.

Generative Topographic Mapping

Essentially, the GTM technique projects data from a higher-dimensional space (7-D when seven attributes are used) onto a lower-dimensional (2-D) deformed surface, onto which an interpreter can draw polygons to form geobodies, or which can be projected against a 2-D color bar for better visualization. Again, more details can be found in the November 2020, January 2022 and October 2023 installments of Geophysical Corner.
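
Both the SOM and GTM results are rendered against a 2-D color bar, and the sketch below shows one simple way to do that: bilinearly interpolating between four corner colors as a function of the two latent coordinates. The corner colors and the random projected points are illustrative choices only, not the color bar used to make figures 5 and 6.

```python
import numpy as np
import matplotlib.pyplot as plt

def colors_from_2d_colorbar(u, v):
    """u, v: arrays of latent coordinates scaled to [0, 1].
    Bilinear interpolation between four corner colors gives one RGB per sample."""
    corners = {"ll": np.array([0.0, 0.0, 1.0]),   # blue
               "lr": np.array([1.0, 0.0, 0.0]),   # red
               "ul": np.array([0.0, 1.0, 0.0]),   # green
               "ur": np.array([1.0, 1.0, 0.0])}   # yellow
    u = u[:, None]
    v = v[:, None]
    return ((1 - u) * (1 - v) * corners["ll"] + u * (1 - v) * corners["lr"]
            + (1 - u) * v * corners["ul"] + u * v * corners["ur"])

# toy usage: color a set of projected points and crossplot them
rng = np.random.default_rng(2)
uv = rng.random((2000, 2))
rgb = colors_from_2d_colorbar(uv[:, 0], uv[:, 1])
plt.scatter(uv[:, 0], uv[:, 1], c=rgb, s=4)
plt.xlabel("latent axis 1")
plt.ylabel("latent axis 2")
plt.show()
```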

Figure 6 shows a comparison of the equivalent stratal displays corresponding to the Zone 5 reservoir level, extracted from the GTM crossplot volumes computed for the legacy and reprocessed versions of the seismic data, using a 2-D color bar. The seismic facies exhibited in different colors in figure 6b show a better-defined distribution and more distinct definition than those in figure 6a. These GTM displays are found to be superior to the SOM displays shown in figure 5.

Finally, in figure 7a we show a stratal slice equivalent to the display in figure 6b, extracted from the GTM crossplot volume and corendered with multispectral energy ratio coherence using transparency. Such a display shows the individual seismic facies truncated by the lineaments corresponding to faults that define separate compartments, and could prove to be useful. A traditional way of carrying out such an interpretation would be to adopt spectral decomposition and merge some of the relevant frequency volumes using RGB color blending. In figure 7b, we exhibit an equivalent stratal slice generated by RGB blending of the 30, 50 and 70 hertz frequency volumes. Notice the superior definition not only of the gas anomaly, which stands out well on the GTM displays, but also of the seismic facies seen on these displays as compared with the RGB-blended display. The spectral decomposition utilized a matching pursuit algorithm for computation of the frequency volumes.
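
The sketch below shows the essence of such RGB color blending: three spectral-magnitude stratal slices are individually normalized (here with a simple percentile clip, an illustrative choice) and loaded into the red, green and blue channels of a single image. It assumes the magnitude slices have already been computed, for example by spectral decomposition.

```python
import numpy as np
import matplotlib.pyplot as plt

def rgb_blend(mag_low, mag_mid, mag_high, clip_percentile=98):
    """Each input is a 2-D magnitude slice at one frequency; the output is an
    (nrows, ncols, 3) image with the low/mid/high bands in R/G/B."""
    channels = []
    for band in (mag_low, mag_mid, mag_high):
        clip = np.percentile(band, clip_percentile)
        channels.append(np.clip(band / max(clip, 1e-12), 0.0, 1.0))
    return np.dstack(channels)

# toy usage with synthetic slices
rng = np.random.default_rng(3)
slices = [np.abs(rng.standard_normal((200, 300))) for _ in range(3)]
plt.imshow(rgb_blend(*slices))
plt.title("RGB blend of three frequency magnitudes")
plt.show()
```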

Conclusions

We have found that reprocessed legacy seismic data, when used for attribute generation and as input to the multiattribute processes discussed here, can significantly improve interpretation accuracy. Results obtained from the unsupervised machine learning applications, employing attributes generated from both the vintage and reprocessed seismic data, show superior performance of the latter in terms of clarity of the clusters as well as the color variations within them, probably in response to the expected geologic variations.

After applying the SOM and GTM techniques to the two data volumes, we found that GTM has an edge over SOM in terms of the detail of the seismic facies distribution, with better resolution and more distinct definition of the geologic features seen on the displays.

The seismic facies maps in the zones of interest are to be calibrated with the lithofacies information obtained from well cores and cuttings for more accurate interpretation. Such detailed work would lend further confidence to the facies analysis carried out.

Finally, we believe that reprocessing has helped achieve more interpretation detail, which has led to more accurate reservoir characterization and fault mapping in the zones of interest, i.e., the zones encompassing the storage reservoir. Such information has helped us understand the reservoir better, which can have a bearing on the Stenlille CO2 storage and retrieval being done in the different zones. The enhanced reservoir characterization of the existing natural gas storage at Stenlille is expected to provide insight into the proposed CO2 storage at Havnsø.

Acknowledgements

We thank the Geological Survey of Denmark and Greenland for making the seismic data available for the study presented in this article. The first author would like to thank the Attribute-Assisted Seismic Processing and Interpretation Consortium, University of Oklahoma, for access to their software, which has been used for all attribute computation shown here.
