Thursday, December 13, 2012

Ten ways to spot pseudogeophysics

Geophysicists often try to predict rock properties using seismic attributes — an inverse problem. It is difficult and can be complicated. It can seem like black magic, or at least a black box. Practitioners can pull the wool over their own eyes in the process, so don't be surprised if it seems like they are trying to pull the wool over yours. Instead, ask a lot of questions.

Questions to ask

  1. What is the reliability of the logs that are inputs to the prediction? Ask about hole quality and log editing.
  2. What about the seismic data? Ask about signal-to-noise ratio, multiples, bandwidth, resolution limits, polarity, maximum offset angle (for AVO studies), and the processing flow (e.g. Emsley, 2012).
  3. What is the quality of the well ties? Is the correlation good enough for the proposed application?
  4. Is there any physical reason why the seismic attribute should predict the proposed rock property? Was this explained to you? Were you convinced?
  5. Is the proposed attribute redundant (sensu Barnes, 2007)? Does it really give better results than a less sexy approach? I’ve seen 5-minute trace integration outperform month-long AVO inversions (Hall et al. 2006).
  6. What are the caveats and uncertainties in the analysis? Is there a quantitative, preferably Bayesian, treatment of the reliability of the predictions being made? Ask about the probability of a prediction being wrong.
  7. Is there a convincing relationship between the rock property (shear impedance, say) and some geologically interesting characteristic that you actually make decisions with, e.g. frackability?
  8. Is there a convincing relationship between the rock property and the seismic attribute at the wells? In other words, does the attribute actually correlate with the property where we have data?
  9. What does the low-frequency model look like? How was it made? Its maximum frequency should be about the same as the seismic data's minimum, no more.
  10. Does the geophysicist compute errors from the training error or the validation error? Training errors are not helpful because they are circular: you are comparing the training data with a model built from those very data. Funnily enough, most geophysicists like to show the training error (right), but if the model is over-fit then of course it will predict very nicely at the well! It's the reliability away from the wells we are interested in, so we should examine the error we get when we pretend the well isn't there (see the sketch after this list). I prefer this to withholding 'blind' wells from the modeling — you should use all the data. 
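To make the distinction in point 10 concrete, here is a minimal Python sketch using leave-one-out cross-validation. Everything in it is illustrative: the data are random, and the linear model stands in for whatever attribute-to-property mapping is being sold.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(42)

# Pretend each row is one well: a few seismic attributes and a rock property.
n_wells, n_attributes = 12, 5
X = rng.normal(size=(n_wells, n_attributes))      # attributes extracted at wells
y = X[:, 0] + 0.1 * rng.normal(size=n_wells)      # property, mostly attribute 0

model = LinearRegression()

# Training error: predict the same wells the model was built from. Optimistic.
train_pred = model.fit(X, y).predict(X)
train_rmse = np.sqrt(np.mean((y - train_pred) ** 2))

# Validation error: refit with each well held out in turn, predict it 'blind'.
loo_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
loo_rmse = np.sqrt(np.mean((y - loo_pred) ** 2))

print(f"Training RMSE:      {train_rmse:.3f}")    # always the flattering number
print(f"Leave-one-out RMSE: {loo_rmse:.3f}")      # the number to ask for
```

The leave-one-out number will essentially always be worse; if the gap is large, the model is probably over-fit.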

Lastly, it might seem harsh, but we could also ask whether the geophysicist has a direct financial interest in convincing you that their attribute is sound, on top of the normal professional interest. It's not a problem if they do, but be on your guard: people who are selling things are especially prone to bias. It's unavoidable.

What do you think? Are you bamboozled by the way geophysicists describe their predictions?

References
Barnes, A (2007). Redundant and useless seismic attributes. Geophysics 72 (3), P33–P38. DOI: 10.1190/1.2370420.
Emsley, D (2012). Know your processing flow. In: Hall, M and E Bianco, eds, 52 Things You Should Know About Geophysics. Agile Libre.
Hall, M, B Roy, and P Anno (2006). Assessing the success of pre-stack inversion in a heavy oil reservoir: Lower Cretaceous McMurray Formation at Surmont. Canadian Society of Exploration Geophysicists National Convention, Calgary, Canada, May 2006.

The image of the training error plot — showing predicted logs in red against input logs — is from Hampson–Russell's excellent EMERGE software. I'm claiming fair use of the copyrighted image.


Reader Comments (6)

The reasons stated here are good arguments for non-linear solutions that produce the best answer attainable under the prevailing noise conditions.

December 13, 2012 | Unregistered Commenter David Paige

Hi Matt,
Another point you could add to the list, which is very helpful:

Ask for synthetic seismic generated from the inversion results. How good is the match to the real seismic? Does it predict the correct AVO behaviour, and so on?

If we can't get a good match between synthetic and real seismic away from the wells, then we can't hope to relate the inversion products to the rock properties we are after.

Cheers, MattS

December 15, 2012 | Unregistered Commenter Matt Saul

Hi Matt

Nice post. It will go on my wall right next to the interpreter's canon.

Question on point 3: how do you decide when a correlation is good enough? Is there a way to define an objective cutoff according to the intended application?
For post-stack inversion I usually define a cutoff for correlation calculated on a window larger than the area of interest, with absolutely no stretch/squeeze. Then I rank the survivors by their correlation over a much narrower window around the reservoir. My cutoff is very subjectively defined, and yet it's the best thing I can come up with on my own.

Any thoughts?

December 15, 2012 | Unregistered Commenter Matteo

@Matt: I like that — completing the modeling circle, as it were. Forward modeling of geological models, inversions, geocellular models, reservoir simulations, and so on: the workflow should be de rigueur in our business, but unfortunately it is not. It reminds me of Lomask et al.'s beautiful forward models of the Earthscape experiments at St Anthony Falls.
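Here is a minimal Python sketch of your check, under heavy simplifications: normal incidence, a known Ricker wavelet, and a single trace. The arrays are synthetic stand-ins; in practice the impedance would come from the inversion and the 'real' trace from the migrated data.

```python
import numpy as np

def ricker(f, dt, length=0.128):
    """Ricker wavelet with peak frequency f (Hz) at sample interval dt (s)."""
    t = np.arange(-length / 2, length / 2, dt)
    return (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

dt = 0.002                                        # 2 ms sample interval
z = np.linspace(6e6, 9e6, 500)                    # inverted P-impedance trace
z[200:260] += 5e5                                 # a layer the inversion found

rc = np.diff(z) / (z[1:] + z[:-1])                # reflection coefficients
synthetic = np.convolve(rc, ricker(30, dt), mode="same")

# Stand-in for the real trace: here, just the synthetic plus noise.
rng = np.random.default_rng(0)
real = synthetic + 0.2 * synthetic.std() * rng.normal(size=synthetic.size)

r = np.corrcoef(synthetic, real)[0, 1]
print(f"Synthetic-to-real correlation: {r:.2f}")  # poor match away from wells
                                                  # means untrustworthy products
```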

@Matteo: Good question... what is good enough? I guess it depends a bit on the scale of the investigation. When I worked in the McMurray of northern Alberta, we had a 1 ms sample interval and were looking for very thin shales, so we would have liked very precise ties... however the in-zone ties were poor (or unknown) because seismic waves do weird things we don't fully understand in bitumen (a pitfall for Matt's modeling idea).

I do like the idea of thresholding on the gross tie, before ranking on the tie in the zone, but then of course you have to decide on the cutoffs. Another approach is to admit everything, and just adjust the reliability — poor ties will presumably have low reliability so will be weakly weighted in the application of the result to the geomodel (say). A completely empirical approach.
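The mechanics of the threshold-then-rank scheme are trivial once you have the two correlation numbers per well; a toy sketch in Python, with made-up numbers and an arbitrary cutoff:

```python
# 'gross' = correlation over the long window, 'zone' = over the reservoir.
ties = {
    "well-01": {"gross": 0.81, "zone": 0.74},
    "well-02": {"gross": 0.55, "zone": 0.80},    # fails the gross cutoff
    "well-03": {"gross": 0.77, "zone": 0.69},
    "well-04": {"gross": 0.90, "zone": 0.58},
}

GROSS_CUTOFF = 0.60    # subjective, as Matteo says; at least declare it

# Threshold on the gross tie, then rank the survivors on the in-zone tie.
survivors = {w: t for w, t in ties.items() if t["gross"] >= GROSS_CUTOFF}
for well in sorted(survivors, key=lambda w: survivors[w]["zone"], reverse=True):
    print(well, survivors[well])
```

The hard part is not the code, it's justifying the 0.60.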

December 16, 2012 | Registered Commenter Matt Hall

Hi Matt

Re: stratigraphic attributes: I just downloaded the paper; they look promising, thanks for the link. I remember seeing some of Chopra and Marfurt's papers.

Re: reliability. I like your idea. One could distill the correlation coefficient into an uncertainty multiplier.
Also, I remembered something the Hampson–Russell folks do. I can't recall if I saw it in a Hampson–Russell course, presentation, paper, or discussed at the convention. The gist of it is that you take an unfiltered version of the initial model and create stratal slices through it. If there are poor ties, they should show up on the slices as bull's eyes.
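A toy illustration of that check in Python; everything here is synthetic, and a real QC would slice the actual initial-model volume from the inversion software:

```python
import numpy as np

nx, ny = 100, 100
x, y = np.meshgrid(np.arange(nx), np.arange(ny))

stratal_slice = np.full((ny, nx), 7.5e6)          # background impedance on a slice
bad_well = (30, 40)                               # map position of a poorly tied well
r2 = (x - bad_well[0]) ** 2 + (y - bad_well[1]) ** 2
stratal_slice += 1e6 * np.exp(-r2 / 50)           # the bull's eye a bad tie leaves

# Crude detector: flag the strongest deviation from the regional background.
anomaly = np.abs(stratal_slice - np.median(stratal_slice))
iy, ix = np.unravel_index(anomaly.argmax(), anomaly.shape)
print(f"Strongest bull's eye at x={ix}, y={iy}")  # should land on the bad well
```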

December 18, 2012 | Unregistered Commenter matteo

Matt, nicely put! A very good checklist.

As a side note, some time ago I came across a rather old paper in which the author said something like: 'it was easier to look at inverted volumes rather than seismic amplitudes...' and I guess the reasons should be obvious to most. How wrong he was! It seems as though many geophysicists (sadly I cannot count myself in that lot) have developed over the years the ability to do real-time deconvolution in their heads while keeping track of changes in elastic properties across an interface at an incidence angle... and why not, if after all the earth is no more than a half-space cartoon! We have all been to outcrops and seen it many times!

So I often come across rather skilful interpreters saying they prefer to use seismic amplitudes, phase-rotated volumes, or even 'fancy' volumes rotated along principal components for property prediction, and not just for horizon interpretation!

Now, what I really find fascinating is that those attributes are almost always accepted without question (because it is fast, software X has a tool to do it, and we have been using it for hundreds of years), and when they fail we add the case to the black book of pitfalls! I wonder why we are not using a similar list of QCs (as in your initial post) more often.

Is it because there really are people who can do deconvolution in their heads in real time? Or is it because they are somehow convinced they can, whereas in fact they have not got a clue?

Cheers

December 20, 2012 | Unregistered Commenter Igor Escobar
