DL Abstract

Talk 2: Interpreting the Black Box: Evaluating machine learning results in seismic analysis with SHAP

This presentation took place on 11 November 2025 at 12:00 PM

Machine learning algorithms have become increasingly valuable in seismic interpretation workflows, yet their "black box" nature often undermines interpreter confidence in the results. This talk demonstrates how SHAP (SHapley Additive exPlanations) methodology bridges this gap by providing transparent insights into complex machine learning models used in seismic facies analysis. Through diverse case studies, I illustrate how SHAP values reveal the relative influence of different seismic attributes on classification decisions across various geological settings. The methodology offers both local explanations for individual predictions and global insights into overall attribute importance, enabling interpreters to identify potential misclassifications and understand the driving factors behind different facies predictions. By integrating these techniques into their workflows, geoscientists can more effectively evaluate machine learning outputs, optimize attribute selection, and make more confident, informed decisions in subsurface characterization projects.
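The core idea behind SHAP, that each feature's Shapley value is its average marginal contribution to a prediction across all feature orderings, can be sketched with a small self-contained example. The attribute names and the linear scoring function below are illustrative, not from the talk, and a real workflow would use the `shap` library against a trained facies classifier:

```python
import itertools
import math

# Hypothetical seismic attributes (illustrative names, not from the talk).
ATTRS = ["amplitude", "coherence", "frequency"]

def model(x):
    # Toy linear scoring function standing in for a trained classifier's output.
    return 2.0 * x["amplitude"] + 1.0 * x["coherence"] - 0.5 * x["frequency"]

def shapley_values(model, x, baseline):
    """Exact Shapley values: each feature's marginal contribution averaged
    over all coalitions, with absent features held at their baseline value."""
    n = len(ATTRS)
    phi = {a: 0.0 for a in ATTRS}
    for size in range(n):
        for subset in itertools.combinations(ATTRS, size):
            # Shapley weight for a coalition of this size.
            weight = (math.factorial(size) * math.factorial(n - size - 1)
                      / math.factorial(n))
            for a in ATTRS:
                if a in subset:
                    continue
                with_a = {f: (x[f] if f in subset or f == a else baseline[f])
                          for f in ATTRS}
                without_a = {f: (x[f] if f in subset else baseline[f])
                             for f in ATTRS}
                phi[a] += weight * (model(with_a) - model(without_a))
    return phi

baseline = {"amplitude": 0.0, "coherence": 0.0, "frequency": 0.0}
sample = {"amplitude": 1.5, "coherence": 0.8, "frequency": 2.0}

phi = shapley_values(model, sample, baseline)
# Additivity: baseline prediction plus the sum of Shapley values
# recovers the model's prediction for this sample.
assert abs(model(baseline) + sum(phi.values()) - model(sample)) < 1e-9
```

For this linear model each attribute's Shapley value reduces to its coefficient times its deviation from baseline, so `phi` directly ranks attribute influence on the prediction, the kind of local explanation the talk describes; averaging absolute values over many samples yields the global attribute importance.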
