Attention Mechanism for Multivariate Time Series Recurrent Model Interpretability Applied to the Ironmaking Industry

15 Jul 2020 · Cedric Schockaert, Reinhard Leperlier, Assaad Moawad

Interpretability of data-driven models is a requirement for process engineers to accept and rely on model predictions when regulating industrial processes in the ironmaking industry. In this paper, we focus on developing an interpretable deep learning architecture for multivariate time series forecasting of the temperature of the hot metal produced by a blast furnace. A Long Short-Term Memory (LSTM) based architecture enhanced with an attention mechanism and guided backpropagation is proposed to accompany each prediction with a local temporal interpretability for every input variable. Results on blast furnace data show high potential for this architecture: the provided interpretability correctly reflects the complex variable relations dictated by the underlying blast furnace process, and the prediction error is reduced compared to a purely recurrent deep learning architecture.
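To make the idea concrete, below is a minimal PyTorch sketch of an LSTM forecaster with additive temporal attention, the kind of architecture the abstract describes. It is not the authors' exact model: the layer sizes, the attention formulation, and the single-output regression head are assumptions for illustration, and the guided backpropagation component is not shown. The attention weights returned by the forward pass are what provide the per-time-step interpretability.

```python
import torch
import torch.nn as nn


class AttentionLSTMForecaster(nn.Module):
    """LSTM encoder with additive temporal attention for multivariate forecasting."""

    def __init__(self, n_features: int, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        # Additive attention scoring over the encoder hidden states (one score per time step).
        self.attn_score = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, 1),
        )
        self.head = nn.Linear(hidden_size, 1)  # e.g. predicted hot metal temperature

    def forward(self, x):
        # x: (batch, time, n_features)
        h, _ = self.lstm(x)                     # (batch, time, hidden)
        scores = self.attn_score(h)             # (batch, time, 1)
        weights = torch.softmax(scores, dim=1)  # temporal attention weights
        context = (weights * h).sum(dim=1)      # weighted sum over time -> (batch, hidden)
        return self.head(context), weights.squeeze(-1)


# Example usage (hypothetical shapes): 30 past time steps of 12 process variables
# mapped to one forecast value; `attn` gives a per-time-step interpretability weight.
model = AttentionLSTMForecaster(n_features=12)
x = torch.randn(8, 30, 12)
y_hat, attn = model(x)
```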
