
Thursday, May 24, 2012

## Thermogeophysics, whuh?

Earlier this month I spent an enlightening week in Colorado at a peer review meeting hosted by the US Department of Energy. The meeting was well attended, with about 300 people from organizations like Lawrence Livermore National Laboratory, Berkeley, Stanford, Sandia National Labs, and *ahem* Agile, and delegates heard about a wide range of cost-shared projects in the Geothermal Technologies Program. Approximately 170 projects were presented, representing a total US Department of Energy investment of \$340 million.

I was at the meeting because we've been working on some geothermal projects in California's Imperial Valley since last October. It's fascinating, energizing work. Challenging too: 3D seismic is not yet a routine technology for geothermal, but it is emerging. What is clear is that geothermal exploration requires a range of technologies and knowledge. It pulls from all the tools you could dream up: active seismic, passive seismic, magnetotellurics, resistivity, LiDAR, hyperspectral imaging, not to mention borehole and drilling technologies. The industry has an incredible learning curve ahead of it if Enhanced Geothermal Systems (EGS) are going to be viable and scalable.

The highlights of the event for me were not the talks that I saw, but the people I met during coffee breaks:

John McLennan & Joseph Moore at the University of Utah have done some amazing laboratory experiments on large blocks of granite. They constructed a "proppant sandwich", pumped fluid through it, and applied polyaxial stress to study geochemical and stress effects on fracture development and permeability pathways. Hydrothermal fluids altered the proppant and gave rise to wormhole-like collapse structures, similar to those in the CHOPS process. They combined diagnostic imaging (CT scans, acoustic emission tomography, X-rays) with sophisticated numerical simulations. A sign that geothermal practitioners are working to keep science up to date with engineering.

Stephen Richards bumped into me in the corridor after lunch, having overheard me talking about the geospatial work I did with the Nova Scotia Petroleum database. Not five minutes passed before he rolled up his sleeves, took over my laptop, and was hacking away. He connected the WMS extension that he built as part of the State Geothermal Data effort to QGIS on my machine, and showed me some of the common file formats and data interchange content models for curating geothermal data on a continental scale. The hard part isn't necessarily the implementation; the hard part is curating the data. And it was a thrill to see it thrown together, in minutes, on my machine. A sign that there is a huge amount of work to be done around opening data.

Dan Getman, Geospatial Section lead at NREL, gave a live demo of the fresh prospector interface he built, which is accessible through OpenEI. I mentioned OpenEI briefly in the poster presentation I gave in Golden last year, and I can't believe how much it has improved since then. Dan once again confirmed the notion that the implementation wasn't rocket science (surely any geophysicist could figure it out), and in doing so renewed my motivation for extending the local petroleum database in my backyard. A sign that geospatial methods are at the core of exploration and discovery.

There was an undercurrent of openness surrounding this event. By and large, the US DOE is paying for half of the research, so full disclosure is practically one of the terms of service. Not surprisingly, it feels more like science going on here, where innovation is being subsidized and intentionally accelerated because there is a demand. It makes me think that activity is a necessary but not sufficient metric for innovation.

Tuesday, May 1, 2012

## K is for Wavenumber

Wavenumber, sometimes called the propagation number, is in broad terms a measure of spatial scale. It can be thought of as a spatial analog to temporal frequency, and is often called spatial frequency. It is usually defined as the number of wavelengths per unit distance, or, in terms of the wavelength λ:

$k = \frac{1}{\lambda}$

The units are m⁻¹, which are nameless in the International System, though cm⁻¹ are called kaysers in the cgs system. The concept is analogous to frequency f, measured in s⁻¹ or hertz, which is the reciprocal of the period T; that is, f = 1/T. In a sense, period can be thought of as a temporal 'wavelength', the length of an oscillation in time.

If you've explored the applications of frequency in geophysics, you'll have noticed that we sometimes don't use ordinary frequency f, in Hertz. Because geophysics deals with oscillating waveforms, ones that vary around a central value (think of a wiggle trace of seismic data), we often use the angular frequency. This way we can also express the close relationship between frequency and phase, which is an angle. So in many geophysical applications, we want the angular wavenumber. It is expressed in radians per metre:

$k = \frac{2\pi}{\lambda}$

The relationship between angular wavenumber and angular frequency is analogous to that between wavelength and ordinary frequency—they are related by the velocity V:

$k = \frac{\omega}{V}$
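A quick numeric check of these relationships, with illustrative values only (a 25 Hz wave in rock with V = 2500 m/s):

```python
import numpy as np

# Illustrative values: a 25 Hz seismic wave in rock with V = 2500 m/s
f = 25.0      # ordinary frequency, Hz
V = 2500.0    # velocity, m/s

wavelength = V / f             # 100 m
k_ordinary = 1.0 / wavelength  # k = 1/lambda, in cycles per metre
omega = 2.0 * np.pi * f        # angular frequency, rad/s
k_angular = omega / V          # k = omega/V, in radians per metre

print(wavelength)   # 100.0
print(k_ordinary)   # 0.01
# The two wavenumbers differ by a factor of 2*pi, as the equations imply
print(np.isclose(k_angular, 2.0 * np.pi * k_ordinary))  # True
```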

It's unfortunate that there are two definitions of wavenumber. Some people reserve the term spatial frequency for the ordinary wavenumber, or use ν (that's a Greek nu, not a vee — another potential source of confusion!), or even σ for it. But just as many call it the wavenumber and use k, so the only sure way through the jargon is to specify what you mean by the terms you use. As usual!

Just as for temporal frequency, the portal to wavenumber is the Fourier transform, computed along each spatial axis. Here are two images and their 2D spectra — a photo of some ripples, a binary image of some particles, and their fast Fourier transforms. Notice how the more organized image has a more organized spectrum (as well as some artifacts from post-processing on the image), while the noisy image's spectrum is nearly 'white':
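This is easy to reproduce with NumPy. A minimal sketch, using a synthetic 'ripple' image (a sinusoid with a made-up wavelength of 8 pixels) instead of a photograph, so it is self-contained:

```python
import numpy as np

# Synthetic 'ripples': a sinusoid with wavelength 8 pixels along x
nx = ny = 128
x = np.arange(nx)
ripples = np.tile(np.sin(2 * np.pi * x / 8.0), (ny, 1))

# 2D amplitude spectrum, with zero wavenumber shifted to the centre
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(ripples)))

# An organized image concentrates energy at +/- its wavenumber;
# find the peak and read off its wavenumber in cycles per pixel
kx = np.fft.fftshift(np.fft.fftfreq(nx))  # wavenumber axis
iy, ix = np.unravel_index(np.argmax(spectrum), spectrum.shape)
print(abs(kx[ix]))  # 0.125, i.e. one cycle per 8 pixels
```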

Explore our other posts about scale.

The particle image is from the sample images in FIJI. The FFTs were produced in FIJI.

Thursday, April 5, 2012

## Polarity cartoons

> ...it is good practice to state polarity explicitly in discussions of seismic data, with some phrase such as: 'increase in impedance downwards gives rise to a red (or blue, or black) loop.'
>
> Bacon, Simm & Redshaw (2003), *3D Seismic Interpretation*, Cambridge

Good practice, but what a mouthful. And perhaps because it is such a cumbersome convention, it is often ignored, assumed, or skipped. We'd like to change that. Polarity is important: everyone needs to know what the wiggles (or colours) of seismic data mean.

### Two important things

Seismic data is about contrasts. The data are an abstraction of geological contrasts in the earth. To connect the data to the geology, there are two important things you need to know about your data:

1. What do the colours mean in terms of digits?
2. What do the digits mean in terms of impedance?

So whenever you show seismic to someone, you need to answer these questions for them. Show the colourbar, and the sign of the digits (the magnitude of the digits is not very important; amplitudes are relative). Show the relationship between the sign of the digits and impedance.

### Really useful

To help you show these things, we have created a table of polarity cartoons for some common colour scales.

1. Decide if you want to use the American–Canadian convention of a downwards increase in impedance resulting in a positive amplitude, or the opposite European–Australian convention. Sometimes people talk about SEG Normal polarity — the de facto SEG standard is the American convention.
2. Choose whether you want to show a high impedance layer sandwiched between low impedance ones, or vice versa. To make this decision, inspect your well ties or plot impedance against lithology. For example, if your sands are relatively fast and dense, you may want to choose the hard layer option.
3. Select a colourmap that matches your displays. If you need another, you can download and edit the SVG file, or email us and we'll add it for you.
4. Right-click on a thumbnail, copy it to your clipboard, and paste it into the corner of your section or timeslice in PowerPoint, Word, or wherever. If the thumbnail is too small or pixelated, click the thumbnail for higher resolution.
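To see what such a cartoon encodes, here is a minimal sketch in NumPy of a hard layer sandwiched between soft ones, under the American–Canadian convention; the impedance values are made up for illustration:

```python
import numpy as np

# A hard layer sandwiched between soft layers (illustrative impedances)
imp = np.array([5e6, 5e6, 8e6, 8e6, 5e6, 5e6])  # acoustic impedance

# Reflection coefficients at the interfaces
rc = np.diff(imp) / (imp[1:] + imp[:-1])

# American-Canadian convention: a downward increase in impedance
# gives a positive amplitude, so the top of the hard layer is a
# peak and its base is a trough. The European-Australian
# convention is simply the negative of this.
print(rc)  # positive at index 1 (top of layer), negative at index 3
```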

With so many options to choose from, we hope this little tool can help make your seismic discussions a little more transparent. What's more, if you see a seismic section without a legend like this, can you be sure the presenter knows the polarity of their data? Perhaps they do, but it is an oversight to assume that everyone else knows it too.

What do you make your audience assume?

Tuesday, April 3, 2012

## Source rocks from seismic

A couple of years ago, Statoil's head of exploration research, Ole Martinsen, told AAPG Explorer magazine about a new seismic analysis method. Not just another way to discriminate between sand and shale, or water and gas, this was a way to assess source rock potential. Very useful in under-explored basins, and Statoil developed it for that purpose, but only the very last sentence of the Explorer article hints at its real utility today: shale gas exploration.

Calling the method Source Rocks from Seismic, Martinsen was cagey about details, but the article made it clear that it's not rocket surgery: “We’re using technology that would normally be used, say, to predict sandstone and fluid content in sandstone,” said Marita Gading, a Statoil researcher. Last October Helge Løseth, along with Gading and others, published a complete account of the method (Løseth et al, 2011).

Because they are actively generating hydrocarbons, source rocks are usually overpressured. Geophysicists have used this fact to explore for overpressured zones and even shale before. For example, Mukerji et al (2002) outlined the rock physics basis for low velocities in overpressured zones. Applying the physics to shales, Liu et al (2007) suggested a three-step process for evaluating source rock potential in new basins: (1) sequence stratigraphic interpretation; (2) seismic velocity analysis to determine source rock thickness; (3) source rock maturity prediction from seismic. Their method is also a little hazy, but the point is that people are looking for ways to get at source rock potential via seismic data.

The Løseth et al article was exciting to see because it was the first explanation of the method that Statoil had offered, and the publication was even covered by Greenwire, by Paul Voosen (@voooos on Twitter). It turns out to be fairly straightforward: acoustic impedance (AI) is inversely and non-linearly correlated with total organic carbon (TOC) in shales, though the relationship is rather noisy in the paper's examples (Kimmeridge Clay and Hekkingen Shale). This means that an AI inversion can be transformed to TOC, if the local relationship is known—local calibration is a must. This is similar to how companies estimate bitumen potential in the Athabasca oil sands (e.g. Dumitrescu 2009).
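Once a local relationship is in hand, the transform itself is trivial. Here is a hedged sketch with a hypothetical power-law calibration; the constants a and b are invented for illustration, and would have to be fit to local well data:

```python
import numpy as np

def toc_from_ai(ai, a=2.6e11, b=1.6):
    """Estimate TOC (%) from acoustic impedance.

    The power-law form and the constants a, b are hypothetical;
    the inverse, non-linear trend is the point. A real study must
    calibrate against local well data (Løseth et al, 2011).
    """
    return a * np.power(ai, -b)

# Inverted acoustic impedance values, in kg/(m^2 s)
ai = np.array([4e6, 6e6, 9e6])
toc = toc_from_ai(ai)
print(toc)  # decreasing: lower impedance implies richer source rock
```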

Figure 6 from Løseth et al (2011). A: Seismic section. B: Acoustic impedance. C: Inverted seismic section where the source rock interval is converted to total organic carbon (TOC) percent. Seismically derived TOC values in source rock intervals can be imported into basin modeling software to evaluate the hydrocarbon generation potential of a basin.

The result is that thick, rich source rocks tend to have a strong negative amplitude at the top, at least in subsiding mud-rich basins like the North Sea and the Gulf of Mexico. Of course, amplitudes also depend on stratigraphy, tuning, and so on. The authors expect amplitudes to dim with offset, because of elastic and anisotropic effects, giving a Class 4 AVO response.

This is a nice piece of work and should find application worldwide. There's a twist though: if you want to try it out yourself, you should know that the method is patent-pending:

WO/2011/026996
INVENTORS: Løseth, H; Wensaas, L; Gading, M; Duffaut, K; Springer, HM
Method of assessing hydrocarbon source rock candidate
A method of assessing a hydrocarbon source rock candidate uses seismic data for a region of the Earth. The data are analysed to determine the presence, thickness and lateral extent of candidate source rock based on the knowledge of the seismic behaviour of hydrocarbon source rocks. An estimate is provided of the organic content of the candidate source rock from acoustic impedance. An estimate of the hydrocarbon generation potential of the candidate source rock is then provided from the thickness and lateral extent of the candidate source rock and from the estimate of the organic content.

References

Dumitrescu, C (2009). Case study of a heavy oil reservoir interpretation using Vp/Vs ratio and other seismic attributes. Proceedings of SEG Annual Meeting, Houston. Abstract is online

Liu, Z, M Chang, Y Zhang, Y Li, and H Shen (2007). Method of early prediction on source rocks in basins with low exploration activity. Earth Science Frontiers 14 (4), p 159–167. DOI 10.1016/S1872-5791(07)60031-1

Løseth, H, L Wensaas, M Gading, K Duffaut, and M Springer (2011). Can hydrocarbon source rocks be identified on seismic data? Geology 39 (12), p 1167–1170. First published online 21 October 2011. DOI 10.1130/G32328.1

Mukerji, T, Dutta, M Prasad, J Dvorkin (2002). Seismic detection and estimation of overpressures. CSEG Recorder, September 2002. Part 1 and Part 2 (Dutta et al, same issue).

The figure is reproduced from Løseth et al (2011) according to The Geological Society of America's fair use guidelines. Thank you GSA! The flaming Kimmeridge Clay photograph is public domain.

Friday, March 23, 2012

## The spectrum of the spectrum

A few weeks ago, I wrote about the notches we see in the spectrums of thin beds, and how they lead to the mysterious quefrency domain. Today I want to delve a bit deeper, borrowing from an article I wrote in 2006.

### Why the funny name?

During the Cold War, the United States government was quite concerned with knowing when and where nuclear tests were happening. One method they used was seismic monitoring. To discriminate between detonations and earthquakes, a group of mathematicians from Bell Labs proposed detecting and timing echoes in the seismic recordings. These echoes gave rise to periodic but cryptic notches in the spectrum, the spacing of which was inversely proportional to the timing of the echoes. This is exactly analogous to the seismic response of a thin-bed.

To measure notch spacing, Bogert, Healy and Tukey (1963) invented the cepstrum (an anagram of spectrum, and therefore usually pronounced kepstrum). The cepstrum is defined as the Fourier transform of the natural logarithm of the Fourier transform of the signal: in essence, the spectrum of the spectrum. To distinguish this new domain from time, to which it is dimensionally equivalent, they coined several new terms. For example, frequency is transformed to quefrency, phase to saphe, filtering to liftering, even analysis to alanysis.

Today, cepstral analysis is employed extensively in linguistic analysis, especially in connection with voice synthesis. This is because, as I wrote about last time, voiced human speech (consisting of vowel-type sounds that use the vocal cords) has a very different time–frequency signature from unvoiced speech; the difference is easy to quantify with the cepstrum.

### What is the cepstrum?

To describe the key properties of the cepstrum, we must look at two fundamental consequences of Fourier theory:

1. convolution in time is equivalent to multiplication in frequency
2. the spectrum of an echo contains periodic peaks and notches

Let us look at these in turn. A noise-free seismic trace s can be represented in the time t domain by the convolution of a wavelet w and reflectivity series r thus

$s(t) = w(t) \ast r(t)$

Then, in the frequency f domain

$S(f) = W(f) \times R(f)$

In other words, convolution in time becomes multiplication in frequency. The cepstrum is defined as the Fourier transform of the log of the spectrum. Thus, taking logs of the complex moduli

$\ln|S| = \ln|W| + \ln|R|$

Since the Fourier transform F is a linear operation, the cepstrum is

$\mathcal{F}(\ln|S|) = \mathcal{F}(\ln|W|) + \mathcal{F}(\ln|R|)$

We can see that the cepstra of the wavelet and reflectivity series are additively combined in the cepstrum of the trace. I have tried to show this relationship graphically below. The rows are domains. The columns are the components w, r, and s. Clearly, these thin beds are resolved by this wavelet, but they might not be in the presence of low frequencies and noise. Spectral and cepstral analysis—and alanysis—can help us cut through the seismic and get at the geology.
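The definition above translates directly into a few lines of NumPy. A minimal sketch, using a spike plus an echo 10 ms later to stand in for a thin bed (the amplitudes and delay are made up):

```python
import numpy as np

dt = 0.001              # sample interval: 1 ms
n = 512
s = np.zeros(n)
s[50] = 1.0             # a spike...
s[60] = 0.5             # ...and its echo, 10 ms later

# The echo puts periodic notches in the amplitude spectrum,
# spaced 1/0.010 s = 100 Hz apart
spectrum = np.abs(np.fft.fft(s))

# Cepstrum: the Fourier transform of the log of the spectrum
cepstrum = np.abs(np.fft.ifft(np.log(spectrum)))

# The echo appears as a peak at its quefrency, i.e. at 10 ms
q = np.argmax(cepstrum[1:n // 2]) + 1  # skip the zero-quefrency sample
print(q)  # 10, i.e. a quefrency of 10 samples = 10 ms
```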

Time series (top), spectra (middle), and cepstra (bottom) for a wavelet (left), a reflectivity series containing three 10-ms thin-beds (middle), and the corresponding synthetic trace (right). The band-limited wavelet has a featureless cepstrum, whereas the reflectivity series clearly shows two sets of harmonic peaks, corresponding to the thin-beds (each 10 ms thick) and the thicker composite package.

References

Bogert, B, M Healy, and J Tukey (1963). The quefrency alanysis of time series for echoes: cepstrum, pseudo-autocovariance, cross-cepstrum, and saphe-cracking. Proceedings of the Symposium on Time Series Analysis, Wiley, 1963.

Hall, M (2006). Predicting stratigraphy with cepstral decomposition. The Leading Edge 25 (2), February 2006 (Special issue on spectral decomposition). doi:10.1190/1.2172313

Greenhouse George image is public domain.
