Friday, February 4, 2011

B is for bit depth

If you give two bits about quantitative seismic interpretation, amplitude maps, inversion, or AVO, then you need to know a bit about bits.

When seismic data is recorded, four bytes are used to store each amplitude value. A byte contains 8 bits, so four of them means 32 bits for every seismic sample, or a bit depth of 32. As Evan explained recently, amplitude values themselves don’t mean much. But we record 32 bits because, at least at the field recording stage, when a day might cost hundreds of thousands of dollars, we want to capture every nuance of the seismic wavefield, including noise, multiples, reverberations, and hopefully even some signal. We have time during processing to sort it all out.

First, it’s important to understand that I am not talking about spatial or vertical resolution, what we might think of as detail. That’s a separate problem, which we can understand by considering a pixelated image: it has poor resolution because it is spatially under-sampled. Here is the same image at two different resolutions. The one on the left is 300 × 240 pixels; the one on the right is 80 × 64 pixels, but reproduced at the same size as the other picture, so the pixels are larger.
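If you want to play with this yourself, spatial under-sampling is a one-liner in NumPy. A minimal sketch, with a synthetic image standing in for the photograph (I use a factor of 4, since the 300-to-80 ratio in the figure isn't a whole number):

```python
import numpy as np

# A synthetic 240 x 300 greyscale image stands in for the photograph.
img = np.random.randint(0, 256, size=(240, 300), dtype=np.uint8)

# Spatial under-sampling: keep every 4th pixel in each direction.
small = img[::4, ::4]
print(small.shape)  # (60, 75): far fewer pixels, same scene
```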

But now consider an image with plenty of detail but no precision. On the left is an 8-bit image with 300 × 240 pixels (just like the left-hand image above, but without the colour). On the right is the same image at the same resolution, 300 × 240 pixels, but with only two bits per sample instead of eight.

Notice that the edges are spatially smooth, not blocky like the pixelated example. The deficiency this time is in the subtlety of the pixel values: though smooth in a spatial sense, the image is not smooth in colour space. We can see this in its histogram (inset), which contains only four values: black, dark grey, light grey, and white. Each of the two bits representing a pixel can be 0 or 1, giving four combinations altogether: 00, 01, 10, and 11. More bits give more possible values: n bits allow 2ⁿ of them.
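Here's what that looks like in NumPy terms; a minimal sketch, again with a synthetic image standing in for the photograph:

```python
import numpy as np

# n bits allow 2**n discrete values:
for bits in (1, 2, 8, 16, 32):
    print(f"{bits:2d} bits -> {2**bits:,} values")
# 1 -> 2, 2 -> 4, 8 -> 256, 16 -> 65,536, 32 -> 4,294,967,296

# Requantize a synthetic 8-bit image to 2 bits by keeping only
# the top two bits of each sample; four grey levels survive.
img8 = np.random.randint(0, 256, size=(240, 300), dtype=np.uint8)
img2 = (img8 >> 6) << 6
print(np.unique(img2))  # [  0  64 128 192]
```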

Clearly, using fewer bits means smaller files: reducing the bit depth from 32 to 8 means storing one byte where there were four, so files shrink by 75%. The smaller file will load faster, take up less space in memory, and make your disk space go further.
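To put rough numbers on it, here's a back-of-envelope calculation; the survey dimensions are made up for illustration:

```python
# A hypothetical survey: 1000 inlines x 1000 crosslines,
# with 1500 time samples per trace.
samples = 1000 * 1000 * 1500

for bits in (32, 16, 8):
    print(f"{bits:2d}-bit volume: {samples * bits / 8 / 1e9:.1f} GB")
# 32-bit: 6.0 GB, 16-bit: 3.0 GB, 8-bit: 1.5 GB
```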

All of this applies to your seismic data. Data is almost always delivered from the processor as 32-bit. In the past, people were more concerned about disk space than they are today, so they often reduced the bit depth of their volumes to 8-bit, meaning the amplitudes can take only 256 discrete values. If you believe this is enough precision, and many people think that for interpretation it is, then you can safely use 8-bit volumes for your work. Indeed, some tools, such as Landmark's GeoProbe volume interpretation tool, can only load 8-bit data.

It is noteworthy in itself that the effect is quite hard to see at this small scale.

But for seismic analysis, especially amplitude mapping, spectral decomposition, or pre-stack work, 8-bit data is probably not precise enough. Opinions vary, but I usually keep my 32-bit volume on disk and make all derivative and attribute volumes 16-bit. I think 65 536 values is enough; given the noise and other uncertainties in the data, any precision beyond that is spurious.
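For the record, going from 32-bit floats to 16-bit integers boils down to scaling the amplitudes into the integer range. A minimal sketch; the scale-factor bookkeeping is simplified here:

```python
import numpy as np

def to_int16(amplitudes):
    """Scale 32-bit float amplitudes into the int16 range.
    Simplified: real software keeps the scale factor (e.g. in
    the file header) so true amplitudes can be recovered."""
    scale = 32767.0 / np.max(np.abs(amplitudes))
    return np.round(amplitudes * scale).astype(np.int16), scale

trace32 = np.random.randn(1500).astype(np.float32)  # a fake trace
trace16, scale = to_int16(trace32)
recovered = trace16 / scale  # within half a quantization step of trace32
```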

Next time, we'll look at what happens when you increase or decrease the bit depth of your data, and at the perils of clipping.


Reader Comments (2)

Matt, what bit depth do you receive your data in, or is it even analog (I have no idea how that would work; I'm picturing a vinyl versus CD situation)? In my experience (with audio and images) you introduce 'false data' when you up-sample, so your upper bit depth limit for workable data would be defined by the bit depth that you first receive the data in.

Also, is there a way to dynamically work with different bit depths for your data? For example, let's say you had a 64-bit dataset. Maybe there are some operations that don't need that much information, or a program that only uses 8-bit, so you can create a down-sampled copy of the data for those operations (leaving the original untouched). There may then be other operations that are most efficient at 16-bit, so you could create a second subset, and so on. This wouldn't be very space friendly, but these days terabytes are cheap; it's processing time that costs the most. So, if you could optimize the data for your operations you may get a bit more efficiency.

It sounds like you do something like this already, but I wonder if there is a way to finesse the process to get extra or more useful information out of it. Using your black and white samples above, notice how in the 2-bit image you lose a lot of fine detail, but due to the increase in contrast many individual objects stand out much more than they do in the 8-bit image.

For example, if I were looking for coniferous trees in that scene, in the 8-bit image the trees blend at the edges and have varied tones, making automated detection difficult, but in the 2-bit version the trees stand out like a black blob in a sea of grey.

Let's say now you were looking for the trunks of the coniferous trees. In that case the 2-bit data is useless as that data is gone, but the 8-bit data is extraordinarily challenging to scan through for this tiny feature, especially given the number of adjacent deciduous trees. With a bit of ingenuity you could use the 2-bit image to quickly find all the coniferous tree areas (or at least potential areas; it might be hard to tell the difference between a roof and a tree at this depth). Once those areas are identified and mapped out, you could overlay that data onto the 8-bit image to analyse the information further (identify what is actually a tree, use the increased contrast to find the trunk).

This is a rudimentary example, but I think if you were looking for, say, oil the process could be much the same: use low-res data to find areas of contrast, then analyse those areas only in the high-res data.

Maybe this is already standard practice too; it's just what your post got me thinking about!

February 4, 2011 | Unregistered Commenter Reid

@Reid: Thank you for the thoughtful, and thought-provoking comment.

I hardly know where to start... One at a time I guess:

- You're spot on; I should have highlighted the fact that once you've resampled to a lower bit-depth, you can't go back. So it's a good idea to keep the delivered data near at hand; it's almost always 32-bit.

- I love the idea of adaptive bit depth. The problem of large data sets has been tackled in another way by some software. For instance, Landmark has a 'bricked' data format that splits the volume into small blocks, each of which has a local scale factor stored with it. This way the full dynamic range allowed by the bit depth (8- or 16-bit) can be used in each block. It's analogous to JPEG image compression; I might do a post about it some time.
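In code, the per-block scaling idea looks roughly like this; a toy sketch of the concept, not Landmark's actual format:

```python
import numpy as np

def quantize_brick(brick):
    """Quantize one block to 8-bit using its own scale factor,
    so every brick gets the full dynamic range of the bit depth.
    (A toy illustration of the idea, not Landmark's format.)"""
    scale = np.max(np.abs(brick)) / 127.0  # assumes a non-silent brick
    return np.round(brick / scale).astype(np.int8), scale

# A quiet brick and a loud brick each use the full 8-bit range:
quiet = 0.01 * np.random.randn(64, 64, 64)
loud = 100.0 * np.random.randn(64, 64, 64)
for brick in (quiet, loud):
    q, scale = quantize_brick(brick)
    error = np.max(np.abs(q * scale - brick))  # at most scale / 2
    print(f"scale {scale:.3g}, worst error {error:.3g}")
```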

- Your analysis of the trees on the low-precision image is just great. This process is called image segmentation in robot vision and image analysis. It's a fascinating area; I only wish my math were up to it! Thresholding (reducing an image to 1-bit samples) is a common technique, so the 2-bit example is almost there. I like your observation that one may want to use a sort of stepwise thresholding: reduce detail, then add back detail, then reduce, then add back, and so on. Very interesting. Another approach is edge-enhancing filtering, e.g. with the Kuwahara and Retinex filters.
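For anyone who wants to try it, thresholding and blob-labelling take only a few lines with NumPy and SciPy; the scene is synthetic here, and the threshold of 64 is arbitrary:

```python
import numpy as np
from scipy import ndimage

# A synthetic 8-bit scene; dark pixels play the part of the conifers.
img8 = np.random.randint(0, 256, size=(240, 300), dtype=np.uint8)

dark = img8 < 64                 # thresholding: down to a 1-bit image
blobs, n = ndimage.label(dark)   # label connected dark regions
print(f"{n} candidate 'tree' blobs")
```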

February 4, 2011 | Registered Commenter Matt Hall
