
      Machine learning toward advanced energy storage devices and systems

      Review article
      iScience, Elsevier
      Subject areas: Applied Computing, Energy Storage, Materials Design


          Summary

          Technology advancement demands energy storage devices (ESDs) and systems (ESSs) with better performance, longer life, higher reliability, and smarter management strategies. Designing such systems involves trade-offs among a large set of parameters, while advanced control strategies need to rely on the instantaneous status of many indicators. Machine learning can dramatically accelerate calculations, capture complex mechanisms to improve prediction accuracy, and make optimized decisions based on comprehensive status information. Its computational efficiency makes it suitable for real-time management. This paper reviews recent progress in this emerging area, especially new concepts, approaches, and applications of machine learning technologies for commonly used energy storage devices (including batteries, capacitors/supercapacitors, fuel cells, and other ESDs) and systems (including battery ESSs, hybrid ESSs, grids and microgrids containing energy storage units, pumped-storage systems, and thermal ESSs). Perspectives on future directions are also discussed.
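          As a toy illustration of the surrogate-modeling pattern the review surveys (this sketch is not from the paper; the features and data below are synthetic and hypothetical), a learned regressor can map easily measured cycling features to a battery health indicator far faster than a physics-based simulation:

              import numpy as np
              from sklearn.ensemble import RandomForestRegressor
              from sklearn.model_selection import train_test_split

              # Synthetic stand-in data: three hypothetical per-cycle features
              # (mean cell temperature, depth of discharge, charge rate) and a
              # capacity-retention target with a simple built-in trend plus noise.
              rng = np.random.default_rng(0)
              X = rng.random((500, 3))
              y = 1.0 - 0.25 * X[:, 1] - 0.10 * X[:, 2] + 0.02 * rng.standard_normal(500)

              X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

              # Once trained, a single prediction is cheap enough for the
              # real-time management loops discussed in the review.
              model = RandomForestRegressor(n_estimators=200, random_state=0)
              model.fit(X_train, y_train)
              print(f"Held-out R^2: {model.score(X_test, y_test):.3f}")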

          Most cited references (146)


          Long Short-Term Memory

              Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
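              The gating mechanism is compact enough to sketch directly. Below is a minimal NumPy version of a single LSTM step (an illustration, not the authors' code; the forget gate shown is a later extension by Gers et al. that the original 1997 formulation did not include):

                  import numpy as np

                  def sigmoid(z):
                      return 1.0 / (1.0 + np.exp(-z))

                  def lstm_step(x, h_prev, c_prev, W, b):
                      # The additive update of the cell state c is the "constant error
                      # carousel": gradients pass through it without decaying, while the
                      # multiplicative gates learn when to write, erase, and read.
                      n = h_prev.size
                      z = W @ np.concatenate([x, h_prev]) + b  # all gate pre-activations at once
                      i = sigmoid(z[0 * n:1 * n])              # input gate: controls writing
                      f = sigmoid(z[1 * n:2 * n])              # forget gate: controls erasing
                      o = sigmoid(z[2 * n:3 * n])              # output gate: controls reading
                      g = np.tanh(z[3 * n:4 * n])              # candidate cell input
                      c = f * c_prev + i * g                   # constant error carousel update
                      h = o * np.tanh(c)                       # exposed hidden state
                      return h, c

              Each step touches each weight once, matching the O(1) per-time-step-and-weight cost noted in the abstract; in practice W and b are learned by backpropagation through time.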

            SMOTE: Synthetic Minority Over-sampling Technique

              An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Real-world datasets are often predominantly composed of "normal" examples with only a small percentage of "abnormal" or "interesting" examples, and the cost of misclassifying an abnormal (interesting) example as a normal one is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that combining over-sampling of the minority (abnormal) class with under-sampling of the majority (normal) class achieves better classifier performance (in ROC space) than under-sampling the majority class alone, and better than varying the loss ratios in Ripper or the class priors in Naive Bayes. The over-sampling method creates synthetic minority class examples. Experiments are performed using C4.5, Ripper, and a Naive Bayes classifier, and the method is evaluated using the area under the receiver operating characteristic curve (AUC) and the ROC convex hull strategy.
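              The interpolation step is simple enough to sketch. The following is a simplified NumPy rendering (illustrative only; it draws source samples at random rather than looping over every minority example as the paper's procedure does, and a production pipeline would typically use the SMOTE implementation in the imbalanced-learn library):

                  import numpy as np

                  def smote(X_min, n_synthetic, k=5, seed=None):
                      # Each synthetic point lies on the segment between a minority
                      # sample and one of its k nearest minority-class neighbors.
                      rng = np.random.default_rng(seed)
                      out = []
                      for _ in range(n_synthetic):
                          i = rng.integers(len(X_min))
                          d = np.linalg.norm(X_min - X_min[i], axis=1)  # distances within the class
                          neighbors = np.argsort(d)[1:k + 1]            # skip the sample itself
                          j = rng.choice(neighbors)
                          gap = rng.random()                            # interpolation factor in [0, 1)
                          out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
                      return np.asarray(out)

              Because the new points are interpolated rather than duplicated, the classifier's decision region around the minority class broadens instead of merely tightening around the existing examples.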

              A fast learning algorithm for deep belief nets.

              We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
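              A compressed sketch of the greedy building block follows (illustrative only; bias terms, the label units, and the wake-sleep fine-tuning stage are omitted, and single-step contrastive divergence stands in for the full learning rule):

                  import numpy as np

                  def sigmoid(z):
                      return 1.0 / (1.0 + np.exp(-z))

                  def train_rbm(v_data, n_hidden, epochs=20, lr=0.1, seed=0):
                      # Train one restricted Boltzmann machine layer with CD-1.
                      rng = np.random.default_rng(seed)
                      W = 0.01 * rng.standard_normal((v_data.shape[1], n_hidden))
                      for _ in range(epochs):
                          ph0 = sigmoid(v_data @ W)                         # P(h = 1 | v) on the data
                          h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden states
                          v1 = sigmoid(h0 @ W.T)                            # one-step reconstruction
                          ph1 = sigmoid(v1 @ W)
                          W += lr * (v_data.T @ ph0 - v1.T @ ph1) / len(v_data)  # CD-1 update
                      return W

                  def pretrain_dbn(data, layer_sizes):
                      # Greedy layer-wise stacking: each RBM models the previous
                      # layer's hidden activations, one layer at a time.
                      weights, layer_input = [], data
                      for n_hidden in layer_sizes:
                          W = train_rbm(layer_input, n_hidden)
                          weights.append(W)
                          layer_input = sigmoid(layer_input @ W)  # propagate probabilities upward
                      return weights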

                Author and article information

                Journal
                iScience (Elsevier)
                ISSN: 2589-0042
                Published online: 13 December 2020
                Issue date: 22 January 2021
                Volume 24, Issue 1, Article 101936
                Affiliations
                [1] Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
                [2] Department of Materials Science & Engineering, University of Michigan, Ann Arbor, MI 48109, USA
                Corresponding author: weilu@umich.edu
                Article
                PII: S2589-0042(20)31133-0
                DOI: 10.1016/j.isci.2020.101936
                PMCID: PMC7797524
                PMID: 33458608
                Record ID: 172d2851-3ba6-4a15-b50f-e5d8106f3f9b
                © 2020 The Author(s)

                This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

                Categories: Review

                Keywords: applied computing, energy storage, materials design
