Efficient Seismic Facies Classification Using Transformer-based Masked Autoencoders

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Oil and gas exploration is often hindered by the time-consuming and costly processing and interpretation of seismic data. Moreover, the massive volume of seismic data renders manual interpretation practically impossible. Researchers therefore typically resort to automatic algorithms, such as machine learning models, to aid the manual interpretation of full seismic volumes. Nonetheless, many of these algorithms demand considerable labeled data to build a model capable of achieving satisfactory performance. We instead propose a self-supervised framework that alleviates the need for large amounts of labeled data. Our approach is based on masked autoencoders built on vision transformers. Much like the attributes traditionally extracted from seismic data for use in seismic interpretation, masked autoencoders can learn valuable representations of seismic data that aid the labeling process. A 3D marine seismic survey from the New Zealand government, called Parihaka, is used to validate the proposed procedure on the task of facies classification. This initial analysis shows that comparable or better performance can be obtained with a small portion of the labels: our method achieved 78% accuracy on the untrained portion of the data using only 20% of the labels, whereas a fully supervised approach reached 61% accuracy.
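The self-supervised pretraining described above relies on the standard masked-autoencoder recipe: a seismic section is split into patches, a large random subset of patches is hidden, and the transformer encoder sees only the visible ones. The sketch below illustrates that masking step with NumPy; the patch size, the 75% mask ratio, and all function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def patchify(section, patch=16):
    """Split a 2D seismic section (H, W) into non-overlapping flat patches."""
    h, w = section.shape
    assert h % patch == 0 and w % patch == 0, "section must tile evenly"
    s = section.reshape(h // patch, patch, w // patch, patch)
    return s.transpose(0, 2, 1, 3).reshape(-1, patch * patch)

def random_mask(patches, mask_ratio=0.75, rng=None):
    """Keep a random subset of patches; only these are fed to the encoder."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    perm = rng.permutation(n)
    keep_idx = np.sort(perm[:n_keep])  # indices of visible patches
    return patches[keep_idx], keep_idx

# Stand-in for one seismic slice; a real survey slice would replace this.
section = np.random.default_rng(1).standard_normal((64, 64))
patches = patchify(section)                 # 16 patches of 256 samples each
visible, keep_idx = random_mask(patches)    # encoder input: 4 visible patches
print(patches.shape, visible.shape)
```

During pretraining, a lightweight decoder would reconstruct the hidden patches from the encoder output, and the pretrained encoder is then fine-tuned on the small labeled subset for facies classification.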
Original language: English (US)
Title of host publication: 84th EAGE Annual Conference & Exhibition
Publisher: European Association of Geoscientists & Engineers
State: Published - 2023

Bibliographical note

KAUST Repository Item: Exported on 2023-05-29
Acknowledgements: The authors thank KAUST and the DeepWave Consortium sponsors for supporting this research. Also, the authors thank Yuanyuan Li and Fu Wang for all the insightful discussions.
