
Thursday
Dec 13, 2012

## Ten ways to spot pseudogeophysics

Geophysicists often try to predict rock properties using seismic attributes — an inverse problem. It is difficult and can be complicated. It can seem like black magic, or at least a black box. Geophysicists can pull the wool over their own eyes in the process, so don’t be surprised if it seems like they are trying to pull the wool over yours. Instead, ask a lot of questions.

1. What is the reliability of the logs that are inputs to the prediction? Ask about hole quality and log editing.
2. What about the seismic data? Ask about signal:noise, multiples, bandwidth, resolution limits, polarity, maximum offset angle (for AVO studies), and processing flow (e.g. Emsley, 2012).
3. What is the quality of the well ties? Is the correlation good enough for the proposed application?
4. Is there any physical reason why the seismic attribute should predict the proposed rock property? Was this explained to you? Were you convinced?
5. Is the proposed attribute redundant (sensu Barnes, 2007)? Does it really give better results than a less sexy approach? I’ve seen 5-minute trace integration outperform month-long AVO inversions (Hall et al. 2006).
6. What are the caveats and uncertainties in the analysis? Is there a quantitative, preferably Bayesian, treatment of the reliability of the predictions being made? Ask about the probability of a prediction being wrong.
7. Is there a convincing relationship between the rock property (shear impedance, say) and some geologically interesting characteristic that you actually make decisions with, e.g. frackability?
8. Is there a convincing relationship between the rock property and the seismic attribute at the wells? In other words, does the attribute actually correlate with the property where we have data?
9. What does the low-frequency model look like? How was it made? Its maximum frequency should be about the same as the seismic data's minimum, no more.
10. Does the geophysicist compute errors from the training error or the validation error? Training errors are not helpful because they beg the question by comparing the input training data to the result you get when you use those very data in the model. Funnily enough, most geophysicists like to show the training error (right), but if the model is over-fit then of course it will predict very nicely at the well! But it's the reliability away from the wells we are interested in, so we should examine the error we get when we pretend the well isn't there. I prefer this to withholding 'blind' wells from the modeling — you should use all the data.
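The difference between the two errors is easy to demonstrate. Below is a minimal sketch — the attribute, the 'porosity' values, and the linear model are all invented for illustration, not from any real study — comparing the training error of a least-squares fit with its leave-one-out error, in which we refit the model with each well pretended away:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: a seismic attribute measured at 8 'wells',
# weakly related to the porosity we want to predict
attribute = rng.uniform(0, 1, 8)
porosity = 0.2 * attribute + 0.05 * rng.standard_normal(8)

def rms(residuals):
    return np.sqrt(np.mean(residuals ** 2))

# Training error: fit on all wells, evaluate at those same wells
coeffs = np.polyfit(attribute, porosity, 1)
train_err = rms(np.polyval(coeffs, attribute) - porosity)

# Leave-one-out error: refit with each well pretended away,
# then predict at the well we held back
loo = []
for i in range(len(attribute)):
    mask = np.arange(len(attribute)) != i
    c = np.polyfit(attribute[mask], porosity[mask], 1)
    loo.append(np.polyval(c, attribute[i]) - porosity[i])
loo_err = rms(np.array(loo))

print(train_err, loo_err)  # the leave-one-out error is larger
```

For an ordinary least-squares fit, each leave-one-out residual is at least as large as the corresponding training residual, because the held-back well could not drag the fit toward itself — so the leave-one-out error is the honest number to report.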

Lastly, it might seem harsh but we could also ask if the geophysicist has a direct financial interest in convincing you that their attribute is sound, as well as the normal direct professional interest. It’s not a problem if they do, but be on your guard — people who are selling things are especially prone to bias. It's unavoidable.

What do you think? Are you bamboozled by the way geophysicists describe their predictions?

References
Barnes, A (2007). Redundant and useless seismic attributes. Geophysics 72 (3), P33–P38. DOI: 10.1190/1.2370420.
Emsley, D (2012). Know your processing flow. In: Hall & Bianco, eds, 52 Things You Should Know About Geophysics. Agile Libre.
Hall, M, B Roy, and P Anno (2006). Assessing the success of pre-stack inversion in a heavy oil reservoir: Lower Cretaceous McMurray Formation at Surmont. Canadian Society of Exploration Geophysicists National Convention, Calgary, Canada, May 2006.

The image of the training error plot — showing predicted logs in red against input logs — is from Hampson–Russell's excellent EMERGE software. I'm claiming the use of the copyrighted image is fair use.

Tuesday
Nov 06, 2012

## Resolution, anisotropy, and brains

Day 1 of the SEG Annual Meeting continued with the start of the regular program — 96 talks and 71 posters, not to mention the 323 booths on the exhibition floor. Instead of deciding where to start, I wandered around the bookstore and bought Don Herron's nice-looking new book, First Steps in Seismic Interpretation, which we will review some time soon.

Here are my highlights from the rest of the day.

### Chuck Ursenbach, Arcis

Calgary is the home of seismic geophysics. There's a deep tradition of signal processing, and getting the basics right. Sometimes there's snake oil too, but mostly it's good, honest science. And mathematics. So when Jim Gaiser suggested last year at SEG that PS data might offer as good resolution as SS or PP — as good, and possibly better — you knew someone in Calgary would jump on it with MATLAB. Ursenbach, Cary, and Perz [PDF] did some jumping, and concluded: PP-to-PS mapping can indeed increase bandwidth, but the resolution is unchanged, because the wavelength is unchanged — 'conservation of resolution', as Ursenbach put it. Resolution isn't everything.

### Gabriel Chao, Total E&P

Chao showed a real-world case study starting with a PreSTM gather with a decent Class 2p AVO anomaly at the top of the reservoir interval (TTI Kirchhoff with 450–4350 m offset). There was residual NMO in the gather, as Leon Thomsen himself later forced Chao to admit, but there did seem to be a phase reversal at about 25°. The authors compared the gather with three synthetics: isotropic convolutional, anisotropic convolutional, and full waveform. The isotropic model was fair, but the phase reversal was out at 33°. The anisotropic convolutional model matched well right up to about 42°, beyond which only the full waveform model was close (right). Anisotropy made a similar difference to wavelet extraction, especially beyond about 25°.

With no hockey to divert them, Canadians are focusing on geophysical contests this year. With the Canadian champions Keneth Silva and Abdolnaser Yousetz Zadeh denied the chance to go for the world title by circumstances beyond their control, Canada fielded a scratch team of Adrian Smith (U of C) and Darragh O'Connor (Dalhousie). So much depth is there in the boreal Americas that the pair stormed home with the trophy, the cash, and the glory.

The Challenge Bowl event was a delight — live music, semi-raucous cheering, and who can resist MC Peter Duncan's cheesy jests? If you weren't there, promise yourself you'll go next year.

The image from Chao is copyright of SEG, from the 2012 Annual Meeting proceedings, and used here in accordance with their permissions guidelines. The image of Herron's book is also copyright of SEG; its use here is proposed to be fair use.

Wednesday
Oct 24, 2012

## N is for Nyquist

In yesterday's post, I covered a few ideas from Fourier analysis for synthesizing and processing information. It serves as a primer for the next letter in our A to Z blog series: N is for Nyquist.

In seismology, the goal is to propagate a broadband impulse into the subsurface, and measure the reflected wavetrain that returns from the series of rock boundaries. A question that concerns the seismic experiment is: What sample rate should I choose to adequately capture the information from all the sinusoids that comprise the waveform? Sampling is the capturing of discrete data points from the continuous analog signal — a necessary step in recording digital data. Oversample it, using too high a sample rate, and you might run out of disk space. Undersample it and your recording will suffer from aliasing.

### What is aliasing?

Aliasing is a phenomenon observed when the sample interval is not sufficiently brief to capture the higher range of frequencies in a signal. To avoid aliasing, each constituent frequency has to be sampled at least twice per cycle. The Nyquist frequency is defined as half of the sample rate of a digital recording system. Nyquist has to be higher than all of the frequencies in the observed signal to allow perfect reconstruction of the signal from the samples.

Above Nyquist, the signal frequencies are not sampled twice per cycle, and they fold about Nyquist down to lower frequencies. So not obeying Nyquist deals a double blow: not only do you fail to record all the frequencies, but the frequencies you leave out also corrupt the frequencies you do record. Can you see this happening in the seismic reflection trace shown below? You may need to traverse back and forth between the time domain and frequency domain representations of this signal.

Seismic data are usually acquired with either a 4 millisecond sample interval (250 Hz sample rate) if you are offshore, or a 2 millisecond sample interval (500 Hz) if you are on land. A recording system with a 250 Hz sample rate has a Nyquist frequency of 125 Hz. So information coming in at 150 Hz will wrap around, or fold, to 100 Hz; information at 175 Hz folds to 75 Hz, and so on.
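The folding arithmetic is easy to verify. In this sketch (assuming NumPy; the frequencies are the ones from the example above), a 150 Hz cosine sampled every 4 ms yields exactly the same sample values as a 100 Hz cosine — once recorded, the two are indistinguishable:

```python
import numpy as np

dt = 0.004              # 4 ms sample interval
fs = 1 / dt             # 250 Hz sample rate
nyquist = fs / 2        # 125 Hz
t = np.arange(100) * dt

f_in = 150.0            # above Nyquist
f_alias = fs - f_in     # folds to 100 Hz

recorded = np.cos(2 * np.pi * f_in * t)
alias = np.cos(2 * np.pi * f_alias * t)

# The sample values are identical: after digitization, 150 Hz
# energy cannot be told apart from 100 Hz energy
print(np.allclose(recorded, alias))  # True
```

This is why the aliased energy 'destroys' part of the spectrum: the samples of the out-of-band sinusoid land exactly on top of an in-band one.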

It's important to note that the sample rate of the recording system has nothing to do with the native frequencies being observed. It turns out that most seismic acquisition systems are safe with Nyquist at 125 Hz, because seismic sources such as Vibroseis and dynamite don't send high frequencies very far; the earth filters and attenuates them before they arrive at the receiver.

### Space alias

Aliasing can happen in space, as well as in time. When the pixels in this image are larger than half the width of the bricks, we see these beautiful curved artifacts. In this case, the aliasing patterns are created by the very subtle perspective warping of the curved bricks across a regularly sampled grid of pixels. It creates a powerful illusion, a wonderful distortion of reality. The observations were not sampled at a high enough rate to adequately capture the nature of reality. Watch for this kind of thing on seismic records and sections: spatial aliasing.

Click for the full demonstration (or adjust your screen resolution). You may also have seen this dizzying illusion in which an accelerating wheel suddenly appears to change direction once it rotates faster than the frame rate of the video capturing it. The classic example is the wagon wheel effect in old Western movies.

Aliasing is just one phenomenon to worry about when transmitting and processing geophysical signals. Anti-alias filters, applied before sampling, can protect the recording, but if you really care about recovering all the information that the earth is spitting out at you, you probably need to oversample. At least twice for the shortest wavelengths.

Thursday
Aug 16, 2012

## The power of stack

Multiplicity is a basic principle of seismic acquisition. Our goal is to acquire lots of traces—lots of spatial samples—with plenty of redundancy. We can then exploit the redundancy by mixing traces, sacrificing some spatial resolution for increased signal:noise. When we add two traces, the repeatable signal adds constructively, reinforcing and clarifying. The noise, on the other hand, is spread evenly about zero and close to random, and tends to cancel itself. This is why you sometimes hear geophysicists refer to 'the power of stack'.

Here's an example. There are 20 'traces', each a 100-sample-long sequence of random numbers (white noise). The numbers range between –1 and +1. I added some signal to samples 20, 40, 60 and 80. The signals have amplitudes 0.25, 0.5, 0.75, and 1. You can't see them in the traces, because these tiny amplitudes are completely hidden by noise. The stacked trace on the right is the sum of the 20 noisy traces. We see mostly noise, but the signal emerges. A signal of just 0.5—half the peak amplitude of the noise—is resolved by this stack of 20 traces; the 0.75 signal stands out beautifully.
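A version of this experiment takes only a few lines of NumPy. This is a sketch, not the code behind the original figure — the random seed and the use of a mean rather than a raw sum are my choices:

```python
import numpy as np

rng = np.random.default_rng(42)

n_traces, n_samples = 20, 100

# White noise between -1 and +1, one row per trace
traces = rng.uniform(-1, 1, size=(n_traces, n_samples))

# Bury four spikes, amplitudes 0.25 to 1, in every trace
signal = np.zeros(n_samples)
signal[[20, 40, 60, 80]] = [0.25, 0.5, 0.75, 1.0]
traces += signal

# Stack: average the traces (a normalized sum)
stack = traces.mean(axis=0)

# The background noise in the stack is roughly sqrt(20) times
# weaker than in any single trace, so the spikes emerge
print(np.std(traces[0] - signal), np.std(stack - signal))
```

The square-root law is the key design fact: doubling the fold buys you only a factor of about 1.4 in signal:noise, which is why high-fold surveys get expensive quickly.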

Here's another example, but with real data. This is part of Figure 3 from Liu, G, S Fomel, L Jin, and X Chen (2009). Stacking seismic data using local correlation. Geophysics 74 (2) V43–V48. On the left is an NMO-corrected (flattened) common mid-point gather from a 2D synthetic model with Gaussian noise added. These 12 traces each came from a single receiver, though in this synthetic case the receiver was a virtual one. Now we can add the 12 traces to get a single trace, which has much stronger signal, relative to the background noise, than any of the input traces. This is the power of stack. In the paper, Liu et al. improve on the simple sum by weighting the traces adaptively. Click to enlarge.

The number of traces available for the stack is called fold. The examples above have folds of 20 and 12. Geophysicists like fold. Fold works. Let's look at another example.

Above, I've made a single digit 1 with 1% opacity — it's almost invisible. If I stack two, with a little random jitter, the situation is still desperate. When I have five digits, I can at least see the hidden image with some fidelity. However, if I add random noise to the image, a fold of 5 is no longer enough. I need at least 10, and ideally more like 20, images stacked up to see any signal. So it is for seismic data: to see through the noise, we need fold.

Now you know a bit about why we want more traces from the field, next time I'll look at how much those traces cost, and how to figure out how many you need.

Thank you to Stuart Mitchell of Calgary for the awesome analogy for seismic fold.

Tuesday
Aug 14, 2012

## Great geophysicists #4: Fermat

This Friday is Pierre de Fermat's 411th birthday. The great mathematician was born on 17 August 1601 in Beaumont-de-Lomagne, France, and died on 12 January 1665 in Castres, at the age of 63. While not a geophysicist sensu stricto, Fermat made a vast number of important discoveries that we use every day, including the principle of least time, and the foundations of probability theory.

Fermat built on Heron of Alexandria's idea that light takes the shortest path, proposing instead that light takes the path of least time. These ideas might seem equivalent, but think about anisotropic and inhomogeneous media. Fermat continued by deriving Snell's law. Let's see how that works.

We start by computing the time taken along a path:

$t = \frac{\sqrt{a^2 + x^2}}{v_1} + \frac{\sqrt{b^2 + y^2}}{v_2}$

Then we differentiate with respect to x. Because the total horizontal distance is fixed, y decreases as x increases (dy/dx = –1), which is where the minus sign comes from. This effectively gives us the slope of the graph of time vs distance.

$\frac{\mathrm{d}t}{\mathrm{d}x} = \frac{x}{v_1 \sqrt{a^2 + x^2}} - \frac{y}{v_2 \sqrt{b^2 + y^2}}$

We want to minimize the time taken, which happens at the minimum on the time vs distance graph. At the minimum, the derivative is zero. The result is instantly recognizable as Snell's law:

$0 = \frac{\sin\theta_1}{v_1} - \frac{\sin\theta_2}{v_2}$
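We can also check the derivation numerically. The sketch below (depths, velocities, and offsets invented for illustration) finds the least-time crossing point by brute force and confirms that sin θ₁/v₁ = sin θ₂/v₂ there:

```python
import numpy as np

a, b = 1000.0, 800.0     # depths of source and receiver from the interface (m)
v1, v2 = 2000.0, 3000.0  # velocities above and below the interface (m/s)
total = 1500.0           # fixed horizontal distance, so y = total - x

def travel_time(x):
    y = total - x
    return np.sqrt(a**2 + x**2) / v1 + np.sqrt(b**2 + y**2) / v2

# Brute-force search for the least-time crossing point
x = np.linspace(0, total, 1_000_001)
x_min = x[np.argmin(travel_time(x))]
y_min = total - x_min

# Snell's law should hold at the minimum
sin1 = x_min / np.sqrt(a**2 + x_min**2)
sin2 = y_min / np.sqrt(b**2 + y_min**2)
print(sin1 / v1, sin2 / v2)  # equal to within the grid spacing
```

The two ratios agree to many decimal places, exactly as the vanishing derivative predicts.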

### Maupertuis's generalization

The principle is a core component of the principle of least action in classical mechanics, first proposed by Pierre Louis Maupertuis (1698–1759), another Frenchman. Indeed, it was Fermat's handling of Snell's law that Maupertuis objected to: he didn't like Fermat giving preference to least time over least distance.

Maupertuis's generalization of Fermat's principle was an important step. By the application of the calculus of variations, one can derive the equations of motion for any system. These are the equations at the heart of Newton's laws and Hooke's law, which underlie all of the physics of the seismic experiment. So, you know, quite useful.

### Probably very clever

It's so hard to appreciate fundamental discoveries in hindsight. Together with Blaise Pascal, he solved basic problems in practical gambling that seem quite straightforward today. For example, Antoine Gombaud, the Chevalier de Méré, asked Pascal: why is it a good idea to bet on getting a 1 in four dice rolls, but not on a double-1 in twenty-four? But at the time, when no-one had thought about analysing problems in terms of permutations and combinations before, the solutions were revolutionary. And profitable.

For setting Snell's law on a firm theoretical footing, and introducing probability into the world, we say Pierre de Fermat (pictured here) is indeed a father of geophysics.
