
Benchmarking Attention-Based Interpretability of Deep Learning in Multivariate Time Series Predictions


Abstract

The adoption of deep learning models within safety-critical systems cannot rely on good prediction performance alone; it also requires interpretable and robust explanations of their decisions. When modeling complex sequences, attention mechanisms are regarded as the established approach for equipping deep neural networks with intrinsic interpretability. This paper focuses on the emerging trend of specifically designing diagnostic datasets for understanding the inner workings of attention-based deep learning models for multivariate forecasting tasks. We design a novel benchmark of synthetic datasets with transparent underlying generating processes of multiple time series interactions of increasing complexity. The benchmark enables empirical evaluation of attention-based deep neural networks in three aspects: (i) prediction performance, (ii) interpretability correctness, and (iii) sensitivity analysis. Our analysis shows that although most models achieve satisfactory and stable prediction performance, they often fail to give correct interpretations. The only model with both a satisfactory performance score and correct interpretability is IMV-LSTM, which captures both autocorrelations and cross-correlations between multiple time series. Interestingly, when IMV-LSTM is evaluated on simulated data from statistical and mechanistic models, the correctness of its interpretability increases with more complex datasets.
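
As a concrete illustration of what such a benchmark measures, the sketch below (plain NumPy; not the authors' benchmark or IMV-LSTM code) generates a bivariate series with a transparent generating process, where the target depends on its own previous value (autocorrelation) and on a driver series at a fixed lag (cross-correlation), and scores interpretability correctness as the fraction of importance mass a model's attention weights place on the true generating inputs. The function names and index layout are illustrative assumptions, not APIs from the paper.

import numpy as np

rng = np.random.default_rng(0)

def generate_series(n=1000, lag=3, beta=0.8, noise=0.1):
    """Transparent generating process: y[t] depends on its own past
    (y[t-1], autocorrelation) and on a driver series x at a fixed lag
    (x[t-lag], cross-correlation), plus Gaussian noise."""
    x = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(lag, n):
        y[t] = 0.5 * y[t - 1] + beta * x[t - lag] + noise * rng.normal()
    return x, y

def interpretability_correctness(importance, true_mask):
    """Fraction of total importance mass the model assigns to the
    (variable, lag) inputs that actually generate the target."""
    importance = np.asarray(importance, dtype=float)
    importance = importance / importance.sum()
    return float(importance[true_mask].sum())

x, y = generate_series()

# Ground truth over a flattened window of (variable, lag) inputs,
# e.g. 2 variables x 4 lags: only y[t-1] and x[t-3] drive y[t];
# every other input is a distractor. Indices here are hypothetical.
true_mask = np.zeros(8, dtype=bool)
true_mask[[1, 7]] = True

# Stand-in for attention weights read off a trained model; a correctly
# interpretable model would concentrate its mass on the masked inputs.
attn = rng.random(8)
print(interpretability_correctness(attn, true_mask))

A score near 1 means the model's attention concentrates on the true generating inputs; a score near the fraction of true inputs (here 2/8) is what uninformative, uniform attention would achieve.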


Author and article information

Contributors
Role: Academic Editor
Journal
Entropy (Basel), MDPI
ISSN: 1099-4300
Published: 25 January 2021 (February 2021 issue)
Volume 23, Issue 2, Article 143
Affiliations
[1] Department of Physics, Faculty of Science, University of Zagreb, Bijenička cesta 32, 10000 Zagreb, Croatia; domjanbaric@gmail.com (D.B.); pfumic@phy.hr (P.F.)
[2] Division of Electronics, Ruđer Bošković Institute, Bijenička cesta 54, 10000 Zagreb, Croatia
Author notes
[*] Correspondence: davorh@phy.hr (D.H.); tomislav.lipic@irb.hr (T.L.)
Author information
https://orcid.org/0000-0002-0411-8474
https://orcid.org/0000-0002-8037-8198
Article
entropy-23-00143
DOI: 10.3390/e23020143
PMCID: PMC7912396
PMID: 33503822
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

History
Received: 19 December 2020
Accepted: 21 January 2021

Categories
Article

Keywords
multivariate time series, attention mechanism, interpretability, synthetically designed datasets
