Friday, November 16, 2012

You own your brain

I met someone last week who said her employer — a large integrated oil & gas company — 'owned her'. She said she'd signed an employment agreement that unequivocally spelt this out. This person was certainly a professional on paper, with a graduate degree and plenty of experience. But the company had, perhaps unwittingly, robbed her of her professional independence and self-determination. What a thing to lose.

Agreements like this erode our profession. Do not sign agreements like this. 

The idea that a corporation can own a person is obviously ludicrous — I'm certain she didn't mean it literally. But I think lots of people feel confined by their employment. For some reason, it's acceptable to gossip and whisper over coffee, but talking in any public way about our work is uncomfortable for some people. This needs to change.

Your employer owns your products. They pay you for concerted effort on things they need, and to have their socks knocked off occasionally. But they don't own your creativity, judgment, insight, and ideas — the things that make you a professional. They own their data, and their tools, and their processes, but they don't own the people or the intellects that created them. And they can't — or shouldn't be able to — stop you from going out into the world and being an active, engaged professional, free to exercise and discuss your science with whomever you like.

If you're asked to sign something saying you can't talk at meetings, write about your work, or contribute to open projects like SEGwiki — stop.

These contracts only exist because people sign them. Just say, 'No. I am a professional. I own my brain.'

Wednesday, November 14, 2012

Touring vs tunnel vision

My experience with software started, and still largely sits, at the user end: more often than not, I'm interacting with someone else's design. One thing I have learned from the user experience is that truly great interfaces are engineered to stay out of the way. The interface is only a skin atop the real work that software does underneath — taking inputs, applying operations, producing outputs. I'd say most users of computers don't know how to compute without an interface. I'm trying to break free from that camp.

In The dangers of default disdain, I wrote about the power and control that the technology designer has over his users. A kind of tunnel is imposed that restricts the choices for interacting with data. And for me, maybe for you as well, the tunnel has been a welcome structure, directing my focus towards that distant point; the narrow aperture invokes at least some forward motion. I've unknowingly embraced the tunnel vision as a means of interacting without substantial choices, without risk, without wavering digressions. I think it's fair to say that without this tunnel, most travellers would find themselves stuck, incapacitated by the hard graft of touring over or around the mountain.

Tour guides instead of tunnels

But there is nothing to do inside the tunnel, no scenery to observe, just a black void between input and output. For some tasks, taking the tunnel is the only obvious and economic choice — all you want is to get stuff done. But choosing the tunnel means you will be missing things along the way. It's a trade-off.

For getting from A to B, there are engineers to build tunnels, there are travellers to travel the tunnels, and there is a third kind of person altogether: tour guides take the scenic route. Building your own tunnel is a grand task, only worthwhile if you can find enough passengers to use it. The scenic route isn't just a casual, lackadaisical approach. It's necessary for understanding the landscape; by taking it the traveller becomes connected with the territory. The challenge for software and technology companies is to expose people to the richness of their environment while moving them through at an acceptable pace. Is it possible to have a tunnel with windows?

Oil and gas operating companies are good at purchasing the tunnel access pass, but not very good at building a robust set of tools to navigate the landscape of their data environment — the very thing that we travellers need to be in constant contact with. Touring or tunnelling? The two approaches may or may not arrive at the same destination, and they have different costs along the way, so they are really different businesses.

Thursday, November 8, 2012

Segmentation and decomposition

Day 4 of the SEG Annual Meeting in Las Vegas was a game of two halves: talks in the morning and workshops in the afternoon. I caught two signal processing talks, two image processing talks, and two automatic interpretation talks, then spent the afternoon in a new kind of workshop for students. My highlights:

Anne Solberg, DSB, University of Oslo

Evan and I have been thinking about image segmentation recently, so I'm drawn to those talks (remember Halpert on Day 2?). Angélique Berthelot et al. have been doing interesting work on salt body detection. Solberg (Berthelot's supervisor) showed some remarkable results. Their algorithm:

  1. Compute texture attributes, including Haralick and wavenumber textures (Solberg 2011)
  2. Supervised Bayesian classification (we've been using fuzzy c-means)
  3. 3D regularization and segmentation (okay, I got a bit lost at this point)
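Out of curiosity, here's a toy sketch of the first two steps. To be clear, this is emphatically not the authors' code: local variance stands in for the Haralick and wavenumber textures, and the supervised Bayesian step is a bare-bones one-attribute Gaussian classifier in pure NumPy.

```python
# Toy version of steps 1-2: texture attribute + supervised Bayesian
# classification. Local variance is a crude stand-in for Haralick and
# wavenumber textures; the classifier is a one-attribute Gaussian
# Bayes rule. Not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "seismic" image: low-variance background, high-variance "salt".
img = rng.normal(0, 1.0, (64, 64))
img[20:44, 20:44] = rng.normal(0, 4.0, (24, 24))
truth = np.zeros((64, 64), dtype=int)
truth[20:44, 20:44] = 1

def local_variance(a, half=2):
    """Step 1: a crude texture attribute -- variance in a sliding window."""
    out = np.zeros_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            w = a[max(i - half, 0):i + half + 1, max(j - half, 0):j + half + 1]
            out[i, j] = w.var()
    return out

attr = local_variance(img)

# Step 2: supervised Bayesian (Gaussian) classification, trained on a
# few labelled pixels of each class.
train0 = attr[:10, :10].ravel()        # background samples
train1 = attr[28:36, 28:36].ravel()    # salt samples

def loglike(x, samples):
    mu, var = samples.mean(), samples.var() + 1e-9
    return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)

pred = (loglike(attr, train1) > loglike(attr, train0)).astype(int)
accuracy = (pred == truth).mean()
print(f"pixel accuracy: {accuracy:.2f}")
```

Most of the misclassified pixels sit in a thin halo at the salt boundary, where the sliding window mixes the two textures — which is exactly why a 3D regularization step (their step 3) is worth having.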

The results are excellent, echoing human interpretation well (right) — but having the advantage of being objective and repeatable. I was especially interested in the wavenumber textures, and think they'll help us in our geothermal work. 

Jiajun Han, BLISS, University of Alberta

The first talk of the day was that oil-industry classic: a patented technique with an obscure relationship to theory. But Jiajun Han and Mirko van der Baan of the University of Alberta gave us the real deal — a special implementation of empirical mode decomposition, which is a way to analyse time scales (frequencies, essentially) without leaving the time domain. The result is a set of intrinsic mode functions (IMFs), a bit like Fourier components, from which Han extracts instantaneous frequency. It's a clever idea, and the results are impressive. Time–frequency displays usually show smearing in either the time or the frequency domain, but Han's method pinpoints the signals precisely:
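This isn't Han and van der Baan's EMD code, but the last step — getting instantaneous frequency from a narrow-band component like an IMF — can be sketched with the analytic signal, using only NumPy:

```python
# Instantaneous frequency of a narrow-band signal (a stand-in IMF)
# via the analytic signal. A sketch, not the authors' method.
import numpy as np

fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
imf = np.cos(2 * np.pi * 40 * t)     # stand-in IMF: a 40 Hz oscillation

def analytic(x):
    """Analytic signal via the FFT (what scipy.signal.hilbert does).
    Assumes an even-length input."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1
    h[1:len(x) // 2] = 2             # double positive frequencies
    h[len(x) // 2] = 1               # Nyquist bin
    return np.fft.ifft(X * h)        # negative frequencies are zeroed

phase = np.unwrap(np.angle(analytic(imf)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)
print(f"median instantaneous frequency: {np.median(inst_freq):.1f} Hz")
```

Because the phase is differentiated sample by sample, the frequency estimate is as local in time as the data allow — which is the appeal of the EMD approach over windowed Fourier methods.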

That's it from me for SEG — I fly home tomorrow. It's tempting to stay for the IQ Earth workshop tomorrow, but I miss my family, and I'm not sure I can crank out another post. If you were in Vegas and saw something amazing (at SEG I mean), please let us know in the comments below. If you weren't, I hope you've enjoyed these posts. Maybe we'll see you in Houston next year!

More posts from SEG 2012.

The images adapted from Berthelot and Han are from the 2012 Annual Meeting proceedings. They are copyright of SEG, and used here in accordance with their permissions guidelines.

Thursday, November 8, 2012

Brittleness and robovibes

Day 3 of the SEG Annual Meeting was just as rammed with geophysics as the previous two days. I missed this morning's technical program, however, as I've taken on the chairpersonship (if that's a word) of the SEG Online Committee. So I had fun today getting to grips with that business. Aside: if you have opinions about SEG's online presence, please feel free to send them my way.

Here are my highlights from the rest of the day — both were footnotes in their respective talks:

Brittleness — Lev Vernick, Marathon

Evan and I have had a What is brittleness? post in our Drafts folder for almost two years. We're skeptical of the prevailing view that a shale's brittleness is (a) a tangible rock property and (b) a function of Young's modulus and Poisson's ratio, as proposed by Rickman et al. 2008, SPE 115258. To hear such an intellect as Lev declare the same today convinced me that we need to finish that post — stay tuned for that. Bottom line: computing shale brittleness from elastic properties is not physically meaningful. We need to find more appropriate measures of frackability, which Lev pointed out is, generally speaking, inversely proportional to organic content. This poses a basic conflict for those exploiting shale plays. 

Robovibes — Guus Berkhout, TU Delft

At least 75% of Berkhout's talk went over my head today. I stopped writing notes, which I only do when I'm defeated. But once he'd got his blended source stuff out of the way, he went rogue and asked the following questions:

  1. Why do we combine all seismic frequencies into a single device? Audio got over this years ago (right).
  2. Why do we put all the frequencies at the same location? Cf. 7.1 surround sound.
  3. Why don't we try more crazy things in acquisition?

I've wondered the same thing myself — thinking more about the receiver side than the sources — ever since hearing, at a PIMS Lunchbox Lecture, about the brilliant sampling strategy the Square Kilometre Array is using. But Berkhout didn't stop at just spreading a few low-frequency vibrators around the place. No, he wants robots. He wants an autonomous army of flying and/or floating narrow-band sources, each on its own grid, each with its own ghost matching, each with its own deblending code. This might be the cheapest million-channel acquisition system possible. Berkhout's aeronautical vibrator project starts in January. Seriously.

More posts from SEG 2012. 

Speaker image is licensed CC-BY-SA by Tobias Rütten, Wikipedia user Metoc.

Tuesday, November 6, 2012

Smoothing, unsmoothness, and stuff

Day 2 at the SEG Annual Meeting in Las Vegas continued with 191 talks and dozens more posters. People are rushing around all over the place — there are absolutely no breaks, other than lunch, so it's easy to get frazzled. Here are my highlights:

Adam Halpert, Stanford

Image segmentation is an important class of problems in computer vision. An application to seismic data is to automatically pick a contiguous cloud of voxels from the 3D seismic image — a salt body, perhaps. Before trying to do this, it is common to reduce noise (e.g. roughness and jitter) by smoothing the image. The trick is to do this without blurring geologically important edges. Halpert did the hard work and assessed a number of smoothers for both efficacy and efficiency: median (easy), Kuwahara, maximum homogeneity median, Hale's bilateral [PDF], and AlBinHassan's filter. You can read all about his research in his paper online [PDF]. 
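To see why edge preservation matters, here's a one-dimensional toy — pure NumPy, and nothing to do with Halpert's code: on a noisy step (an "edge"), a median filter removes jitter without smearing the discontinuity, while a plain mean filter blurs it.

```python
# Toy comparison of mean vs median smoothing on a noisy step.
# Illustrates the edge-preservation idea, not any of the filters
# assessed in the paper.
import numpy as np

rng = np.random.default_rng(1)
trace = np.where(np.arange(200) < 100, 0.0, 1.0)   # a sharp edge at i=100
noisy = trace + rng.normal(0, 0.05, 200)

def filt(x, half, reducer):
    """Apply `reducer` (e.g. np.mean, np.median) in a sliding window."""
    return np.array([reducer(x[max(i - half, 0):i + half + 1])
                     for i in range(len(x))])

mean_sm = filt(noisy, 5, np.mean)
med_sm = filt(noisy, 5, np.median)

# Edge sharpness after smoothing: the biggest one-sample jump.
# The median keeps a near-unit jump; the mean spreads it over the window.
print("mean filter max jump:  ", np.abs(np.diff(mean_sm)).max())
print("median filter max jump:", np.abs(np.diff(med_sm)).max())
```

The fancier filters in Halpert's comparison (Kuwahara, bilateral, and so on) are essentially smarter ways of making the same trade in 2D and 3D, where a naive median starts to round off corners and thin beds.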

Dave Hale, Colorado School of Mines

Automatic fault detection is a long-standing problem in interpretation. Methods tend to focus on optimizing a dissimilarity image of some kind (e.g. Bø 2012 and Dorn 2012), or on detecting planar discontinuities in that image. Hale's method is, I think, a new approach. And it seems to work well, finding fault planes and their throw (right).

Fear not, it's not complete automation — the method can't organize fault planes, interpret their meaning, or discriminate artifacts. But it is undoubtedly faster, more accurate, and more objective than a human. His test dataset is the F3 dataset from dGB's Open Seismic Repository. The shallow section, which resembles the famous polygonally faulted Eocene of the North Sea and elsewhere, contains point-up conical faults that no human would have picked. He is open to explanations of this geometry. 
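The "dissimilarity image" idea mentioned above is easy to demonstrate on a toy section — this is not Hale's method, just the simplest possible version: score each trace against its neighbour, and a fault shows up as a dissimilarity peak.

```python
# Toy dissimilarity image for fault detection: a synthetic section with
# one faulted reflector, scored trace-to-trace. A sketch of the general
# idea only -- not Hale's algorithm.
import numpy as np

nt, nx = 100, 50
t = np.arange(nt, dtype=float)
section = np.zeros((nt, nx))
wavelet = np.exp(-0.5 * ((t - 50) / 3.0) ** 2)   # reflector at t = 50
shifted = np.exp(-0.5 * ((t - 60) / 3.0) ** 2)   # thrown down 10 samples
section[:, :25] = wavelet[:, None]               # hanging wall
section[:, 25:] = shifted[:, None]               # foot wall

def dissim(a, b):
    """1 minus the zero-lag normalized cross-correlation of two traces."""
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

d = np.array([dissim(section[:, i], section[:, i + 1]) for i in range(nx - 1)])
fault_at = int(np.argmax(d))
print("fault detected between traces", fault_at, "and", fault_at + 1)
```

Everything beyond this — connecting the peaks into surfaces, estimating throw, rejecting artifacts — is where methods like Hale's earn their keep.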

Other good bits

John Etgen and Chandan Kumar of BP made a very useful tutorial poster about the differences and similarities between pre-stack time and depth migration. They busted some myths about PreSTM:

  • Time migration is actually not always more amplitude-friendly than depth migration.
  • Time migration does not necessarily produce less noisy images.
  • Time migration does not necessarily produce higher frequency images.
  • Time migration is not necessarily less sensitive to velocity errors.
  • Time migration images do not necessarily have time units.
  • Time migrations can use the wave equation.
  • But time migration is definitely less expensive than depth migration. That's not a myth.

Brian Frehner of Oklahoma State presented his research [PDF] to the Historical Preservation Committee, which I happened to be in this morning. Check out his interesting-looking book, Finding Oil: The Nature of Petroleum Geology.

Jon Claerbout of Stanford gave his first talk in several years. I missed it unfortunately, but Sergey Fomel said it was his highlight of the day, and that's good enough for me. Jon is a big proponent of openness in geophysics, so no surprise that he put his talk on YouTube days ago:

The image from Hale is copyright of SEG, from the 2012 Annual Meeting proceedings, and used here in accordance with their permissions guidelines. The DOI links in this post don't work at the time of writing — SEG is on it.