01 March, 2010

'Conversation' Without Velocity

Do you speak ‘seismic’?


Chat room redux: New technology and research efforts are making it possible for seismic data to “talk” to seismic data. And the stories they can tell…

Arthur Weglein

Seismic data processing technology has progressed markedly over the past decade or so.

In fact, the research folks sometimes come up with a new technology that doesn’t just push the envelope, it appears to blast all the way through.

“Talking seismic data” no doubt falls into this category.

Using new seismic methods, the processor or interpreter decides on the topic for the data to “talk” about and then instructs them to talk to one another, staying focused on a specific processing goal until the conversation delivers that objective.

This is no joke.

In fact, this technology already has resulted in forms of coherent noise removal (for example, removing free-surface and internal multiply reflected events) that require no subsurface information and are now widely used within the petroleum industry.

They are particularly effective in complex geologic situations, such as subsalt plays in the Gulf of Mexico, offshore Brazil and the Red Sea.
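For readers curious what such a data “conversation” looks like on paper, the free-surface multiple-removal series can be sketched as follows. This is a deliberately simplified schematic of the approach as described in the published literature – normalization constants, signs and obliquity factors are omitted, and the notation is ours:

    % Schematic free-surface multiple removal (simplified; constants,
    % signs and obliquity factors omitted).
    %   D_1 : recorded data, referenced to a medium without the free surface
    %   D   : output data with all free-surface multiples eliminated
    \[
      D = D_1 + D_2 + D_3 + \cdots ,
      \qquad
      D_{n+1} \sim \int_{\text{measurement surface}} D_n \, D_1 \, dS .
    \]
    % Every term is built from the recorded data alone: composing D_1
    % with itself over the recording surface predicts the first-order
    % free-surface multiples, composing again predicts the next order,
    % and so on. No subsurface model enters at any step.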

The thrust now is to extend that earlier noise removal capability to the extraction of useful subsurface information from signal.

Researchers have developed and are preparing to field test a new imaging method that enables seismic events arriving at the recorder to “converse” with each other to reveal a raft of critical information beneath the earth’s surface.

Included among the goals are depth imaging, target delineation and Q compensation – each using a distinct data conversation that focuses on one of these goals.

The added kicker is that this can be accomplished directly, without any firsthand knowledge of the earth and without expressing a need for subsurface information indirectly through a proxy – velocity and other subsurface data are unnecessary.

This near-mystical-sounding development stems from the Mission-Oriented Seismic Research Program (M-OSRP) established in 2001 at the University of Houston (UH). The program is supported by more than a dozen major oil companies and industry service companies.

The M-OSRP functions under the leadership of its founder, Arthur Weglein, who is the Hugh Roy and Lillie Cranz Cullen Distinguished Professor of Physics at UH.

The research effort is complex, but the goal is defined succinctly: “We want to make the currently inaccessible petroleum target accessible,” Weglein said, “and the accessible target better defined.”

Coming and Going

When a seismic source sends a signal into the earth during the data acquisition phase, the signal travels until it hits an interface, where part of it is reflected back to the seismic recorder. The larger the contrast in properties at the interface, the larger the amplitude, or size, of the reflection. The arrival time reveals how long the round trip took.
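In the simplest textbook setting – an acoustic wave hitting a single interface at normal incidence – those two statements correspond to standard relations (nothing here is specific to M-OSRP):

    \[
      R = \frac{Z_2 - Z_1}{Z_2 + Z_1} ,
      \qquad
      t = \frac{2d}{v} ,
    \]
    % where Z = rho*v is the acoustic impedance on each side of the
    % interface, R is the reflection coefficient (the reflection's
    % relative amplitude), d is the reflector depth, v is the wave
    % velocity and t is the two-way travel time. A larger impedance
    % contrast gives a larger R; a deeper reflector gives a later arrival.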

“We classify events by whether they go straight down and back up, which we call a primary,” Weglein said, “or if a wave bounces around a bit and then comes back up, we call it a multiply-reflected event, or a multiple.

“You want to get rid of multiples because they hit too many reflectors, and you can’t decipher and isolate the encoded information within the multiply-reflected event’s complicated history,” Weglein noted. “We’ve become known for getting rid of the multiples – without knowing anything about the earth.”

Here’s the blueprint.

Historically, the only way to know whether the signal went straight down and back, or bounced around and hit multiple reflectors before returning, has been to know the earth – particularly, to be able to determine the velocity of the signal as it traversed the subsurface.

“If we mathematically make events talk to each other – if we set up a math-physics conversation – we get the data to cooperate and participate in reaching seismic processing goals,” Weglein said. “Without that cooperation, all seismic processing methods – for example, for multiple removal or depth imaging – require subsurface information to reach the same end.

“By getting events to cooperate and communicate with each other with a certain conversation, they tell us which events are down and back, i.e., primaries, and which are multiples – without our knowing the earth,” he said.

“The inverse scattering series, or ISS, is a math-physics program we’ve developed that allows that kind of communication between events for different seismic purposes,” Weglein continued. “The ISS has the unique ability to achieve all processing goals directly and in precisely the same manner that free surface multiples are removed, i.e., without subsurface information.

“The methods we originally developed 20 years ago for removing multiples were highly controversial and radical when we first introduced them, due to their claim of not needing any subsurface information,” Weglein continued. “However, our earlier (so-called) radical ideas for removing all multiples have now become fully mainstream and are in widespread industry use worldwide.

“Now we are focused on primaries and target information extraction,” he said, “and we now claim we can directly determine the depth of the target without any need for a velocity model – that’s the current controversial and radical thought.”
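Schematically – and compressing the notation of the published ISS literature, so treat this as a sketch rather than the working algorithm – the machinery behind those claims runs as follows. The earth is written as a known reference medium, with Green’s function G_0, plus a perturbation V that carries all the unknown subsurface information; the recorded data D are the forward (Born) series in V, and that series is inverted order by order using only D itself:

    \[
      G = G_0 + G_0 V G_0 + G_0 V G_0 V G_0 + \cdots ,
      \qquad
      V = V_1 + V_2 + V_3 + \cdots ,
    \]
    \[
      D = \left( G_0 V_1 G_0 \right)_{\mathrm{ms}} ,
      \qquad
      \left( G_0 V_2 G_0 \right)_{\mathrm{ms}}
        = -\left( G_0 V_1 G_0 V_1 G_0 \right)_{\mathrm{ms}} ,
      \;\ldots
    \]
    % ("ms" means evaluated on the measurement surface.) V_1 is computed
    % directly from the data, V_2 from V_1, and so on; distinct subseries
    % of the V_n terms perform distinct tasks (multiple removal, depth
    % imaging, Q compensation), which is the sense in which a separate
    % "conversation" is set up for each processing goal.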

Complex Challenges

Ordinarily, when working with an individual primary event, the questions become:

  • How deep in the earth did the down-going wave encounter a reflector?
  • What did it experience at that depth?
  • Is what resides at the reflector something that interests the petroleum industry?

In a simple, homogeneous geologic setting where the wave velocity is known to be, say, 60 mph, and the wave makes a round trip into the subsurface and back in one hour, determining the depth of the reflector to be 30 miles is a slam-dunk.
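The arithmetic behind that slam-dunk is simply depth = velocity × round-trip time ÷ 2. A minimal Python sketch, using the illustrative numbers above (the function is ours, not an industry tool):

    # Constant-velocity depth from a round-trip reflection time.
    # Illustrative only: real imaging must cope with a velocity that
    # varies with position, which is exactly the hard part discussed below.
    def reflector_depth(velocity_mph, round_trip_hours):
        """One-way reflector depth in a homogeneous medium, in miles."""
        return velocity_mph * round_trip_hours / 2.0

    # 60 mph and a one-hour round trip put the reflector 30 miles down.
    print(reflector_depth(60.0, 1.0))  # 30.0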

Venture out to the deepwater Gulf of Mexico and such simplicity disappears.

“The problem with current imaging in the deepwater Gulf of Mexico underneath the salt in the subsalt environment is we can’t often enough and accurately enough figure out the velocity above the target, because the salt is very complex,” Weglein said. “They can’t get that 60 mph, so to speak.

“If I have a top salt primary, or bounce, and a primary from bottom salt and the subsalt target primary and I know the velocity experienced to reach each of these, then I could figure the depth,” Weglein said. “But all too often I can’t because it’s a very complicated problem.

“We have a roughly 90 percent failure rate in deepwater Gulf of Mexico drilling, with 25 percent of wells not even reaching the target,” Weglein said. “At $150 million per exploration well, and with pressure to develop fields with fewer wells, this confluence of technical difficulty, drilling hazards and costs is a pressing challenge for the petroleum industry.

“If we were more effective in determining velocity and acquiring images under salt, we would have a higher percent success in drilling,” he said.

Well, you say, maybe people just need more data and more computer speed.

Won’t work.

“Collecting more data and acquiring faster computers are important and useful, but they do not address a key underlying problem behind drilling dry holes – and hence, by themselves, do not represent a comprehensive solution and response,” he said.

“What’s missing and what’s wrong is what we call a breakdown, or violation, of algorithmic assumptions – violations not caused by limited data or computers,” Weglein noted. “Because the current ability to find velocity fails under complex geology, we’ve been looking for a method to find depth without needing velocity, aiming to locate and delineate target reservoirs without having to know anything about what lies above them.”

Up to Speed

To overcome the problem of determining velocity, Weglein cited two approaches:

  • Find a new, improved way to find the velocity, and then use current imaging methods that depend on having an accurate velocity model. However, no candidate method or concept today holds that promise of improved velocity determination.
  • Find a totally new imaging method that doesn’t need velocity either directly or indirectly, which is what the M-OSRP is doing.

“These primaries at the top of the salt, bottom of salt and subsalt target have to have a conversation – a math-physics conversation,” Weglein emphasized. “There is a certain math-physics communication, which the ISS allows, that will output the depth without the velocity.

“If you allow all those primaries, or single bounces, from top salt, base salt and subsalt target to communicate with each other, then they will locate where each of their reflectors is,” Weglein said, “without needing in principle or practice to know velocity or anything about the earth – we’re after this game-changing new imaging capability to make currently inaccessible targets accessible.”

He emphasized that this new target location and imaging capability applies to complex geology other than subsalt and to shallow water environments as well as deep. Additionally, it applies to onshore challenges for removing internal multiples and depth imaging.

The first field test of the ISS imaging theory is scheduled within a year and likely will occur in the deepwater Gulf of Mexico. Actually, this will be a sequence of tests of increasing difficulty, which will kick off by addressing an imaging challenge – e.g., a fault shadow zone – and then move forward in stages to the harder challenges, enabling the M-OSRP team to build its imaging experience on field data.

“The M-OSRP program clearly indicates that the petroleum industry will support fundamental high impact, potentially game-changing research,” Weglein said, “if you can describe to the petroleum sponsors what benefits would derive, and be delivered, if we are successful – and in terms that make sense to them.”