Abstract
Machine learning models for seismic data are often trained sequentially and separately, even though the tasks they perform rely on the same (e.g., geometrical) features of the data. We present StorSeismic, a dataset-centric framework for seismic data processing that consists of neural network (NN) pretraining and fine-tuning procedures. Specifically, we utilize an NN as a preprocessing tool to extract and store the seismic features of a particular dataset for any downstream task. After pretraining, the resulting model can later be fine-tuned to perform different tasks with limited additional training. Used widely in natural language processing (NLP) and lately in vision tasks, Bidirectional Encoder Representations from Transformers (BERT), a form of transformer model, provides an optimal platform for this framework. The attention mechanism of BERT, applied here to a sequence of traces within a shot gather, captures and stores key geometrical features of the seismic data. We pretrain StorSeismic on field data, along with synthetically generated data, in a self-supervised step. We then use labeled synthetic data to fine-tune the pretrained network in a supervised fashion to perform various seismic processing tasks, such as denoising, velocity estimation, first-arrival picking, and normal moveout (NMO) correction. Finally, the fine-tuned model yields satisfactory inference results on the field data.
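The core mechanism the abstract describes is self-attention applied across the traces of a shot gather, with each trace treated as a token. The sketch below is a minimal numpy illustration of that idea only; the random projection matrices stand in for learned BERT weights, and all shapes and names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def trace_self_attention(traces, d_k=16, seed=0):
    """Scaled dot-product self-attention across the traces of one gather.

    traces: (n_traces, n_samples) array; each trace is one token, mirroring
    how a BERT-style encoder would attend over a sequence of traces.
    Random projections replace the trained query/key/value weights.
    """
    rng = np.random.default_rng(seed)
    n_traces, n_samples = traces.shape
    Wq = rng.standard_normal((n_samples, d_k))       # placeholder query weights
    Wk = rng.standard_normal((n_samples, d_k))       # placeholder key weights
    Wv = rng.standard_normal((n_samples, n_samples)) # placeholder value weights
    Q, K, V = traces @ Wq, traces @ Wk, traces @ Wv
    scores = Q @ K.T / np.sqrt(d_k)                  # (n_traces, n_traces)
    # Numerically stable softmax over traces: each output trace is a
    # weighted mixture of all traces in the gather.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V, weights

# Toy gather: 8 traces of 64 time samples each.
gather = np.random.default_rng(1).standard_normal((8, 64))
out, attn = trace_self_attention(gather)
```

In the paper's framework this attention block sits inside a full transformer encoder, pretrained self-supervised and then fine-tuned per task; the sketch only shows why attention over traces can mix information across the whole gather.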
| Original language | English (US) |
| --- | --- |
| Article number | 5921915 |
| Journal | IEEE Transactions on Geoscience and Remote Sensing |
| Volume | 60 |
| State | Published - 2022 |
Bibliographical note
Publisher Copyright: © 1980-2012 IEEE.
Keywords
- Inversion
- machine learning (ML)
- seismic processing
- self-supervised learning
- transformer
ASJC Scopus subject areas
- Electrical and Electronic Engineering
- General Earth and Planetary Sciences