Abstract
Several seismic applications benefit from using all available receivers and a long time window, allowing a fuller representation of both signal and noise. Neural networks can exploit such spatio-temporal data, extracting high-level patterns through their compositions of non-linear functions. However, training such networks is memory intensive, often forcing the data to be downsized and thereby constraining the number of traces and/or the length of the recording. Through the example of developing a deep learning model for passive seismic event detection on a large array of ~3500 sensors, we describe an end-to-end workflow, from synthetic labelled data creation, through distributed model training, to model deployment. We demonstrate how to overcome the memory challenges of large input data by using TensorFlow's data generators to generate and load large seismic recordings on the fly during training. Furthermore, we illustrate how training time can be drastically reduced by distributing training across multiple machines with GPU capability, leveraging Kubernetes and cloud resources for ease of compute orchestration and horizontal scaling. Finally, we highlight that, whilst training is computationally expensive, the trained model can be deployed on a standard, non-GPU machine for real-time detection of passive seismic events.
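The on-the-fly generation described above can be sketched as a Python generator that synthesizes one labelled (recording, label) pair per draw, so the full training set never has to sit in memory. This is a minimal illustrative sketch, not the paper's actual simulator: the array sizes, the boxcar "event", and the linear moveout model are all assumptions made here for demonstration. In practice such a generator would be handed to `tf.data.Dataset.from_generator` so TensorFlow can stream batches during training.

```python
import numpy as np

def synthetic_window_generator(n_receivers=3500, n_samples=1000, seed=0):
    """Yield (recording, label) pairs on the fly instead of holding a
    full training set in memory. Each recording is a (receivers x time)
    float32 array; label is 1.0 when a synthetic event was injected.
    Sizes and the event/moveout model are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    event_len = max(4, n_samples // 10)
    while True:
        # Background noise over the whole array for this time window.
        data = rng.normal(0.0, 1.0, (n_receivers, n_samples)).astype(np.float32)
        label = int(rng.integers(0, 2))  # 1 -> inject a synthetic event
        if label:
            onset = int(rng.integers(0, n_samples // 2))
            for r in range(n_receivers):
                # Crude linear moveout: the arrival time shifts slightly
                # from one receiver to the next across the array.
                t = onset + (r * n_samples) // (4 * n_receivers)
                if t + event_len <= n_samples:
                    data[r, t:t + event_len] += 5.0  # boxcar "event"
        yield data, np.float32(label)

# Example draw on a deliberately small array:
gen = synthetic_window_generator(n_receivers=8, n_samples=64, seed=1)
window, label = next(gen)
```

Because samples are produced lazily, memory use is bounded by one batch rather than by the dataset size, which is what makes training on ~3500-trace windows tractable.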
Original language | English (US)
---|---
DOIs | 
State | Published - 2020
Event | 1st EAGE Digitalization Conference and Exhibition - Vienna, Austria. Duration: Nov 30 2020 → Dec 3 2020
Conference
Conference | 1st EAGE Digitalization Conference and Exhibition
---|---
Country/Territory | Austria
City | Vienna
Period | 11/30/20 → 12/3/20
Bibliographical note
Publisher Copyright: © EAGE 2019.
ASJC Scopus subject areas
- Computer Science Applications
- Software