
      Blood Glucose Prediction with Variance Estimation Using Recurrent Neural Networks


          Abstract

Many factors affect blood glucose levels in type 1 diabetics, several of which vary widely in both the magnitude and the delay of their effect. Modern rapid-acting insulins generally have a peak time after 60–90 min, while carbohydrate intake can affect blood glucose levels more rapidly for high glycemic index foods, or more slowly for other carbohydrate sources. Good estimates of the development of glucose levels in the near future are important both for diabetic patients managing their insulin administration manually and for closed-loop systems making decisions about the administration. Modern continuous glucose monitoring systems provide excellent sources of data to train machine learning models to predict future glucose levels. In this paper, we present an approach for predicting blood glucose levels for diabetics up to 1 h into the future. The approach is based on recurrent neural networks trained in an end-to-end fashion, requiring nothing but the glucose level history for the patient. Our approach obtains results that are comparable to the state of the art on the Ohio T1DM dataset for blood glucose level prediction. In addition to predicting the future glucose value, our model provides an estimate of its certainty, helping users to interpret the predicted levels. This is realized by training the recurrent neural network to parameterize a univariate Gaussian distribution over the output. The approach needs no feature engineering or data preprocessing and is computationally inexpensive. We evaluate our method using the standard root-mean-squared error (RMSE) metric, along with a blood glucose-specific metric called the surveillance error grid (SEG). We further study the properties of the distribution learned by the model, using experiments that determine the nature of the certainty estimate the model is able to capture.
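To make the modeling idea in the abstract concrete, here is a minimal, hypothetical sketch (not the authors' code) of an LSTM that parameterizes a univariate Gaussian over a future glucose value and is trained with the Gaussian negative log-likelihood; the class name, hidden size, and prediction-horizon comment are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class GlucosePredictor(nn.Module):
    """Illustrative sketch: an LSTM reads the CGM glucose history and
    outputs the mean and log-variance of a univariate Gaussian over the
    glucose value at the prediction horizon (e.g., 30 or 60 minutes)."""

    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 2)  # -> (mean, log-variance)

    def forward(self, history: torch.Tensor):
        # history: (batch, time, 1) past glucose readings
        _, (h_last, _) = self.lstm(history)
        mean, log_var = self.head(h_last[-1]).chunk(2, dim=-1)
        return mean, log_var

def gaussian_nll(mean, log_var, target):
    """Negative log-likelihood of target under N(mean, exp(log_var)),
    up to an additive constant; minimizing it fits both the point
    prediction and its certainty estimate."""
    return 0.5 * (log_var + (target - mean).pow(2) / log_var.exp()).mean()
```

The predicted variance is what gives the certainty estimate described in the abstract: a large `exp(log_var)` signals that the point prediction should be trusted less.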

Most cited references (11)


          Long Short-Term Memory

Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
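As an illustration of the mechanism this abstract describes, below is a hypothetical NumPy sketch of a single LSTM step in its modern form (the forget gate was added after the original 1997 paper); the stacked weight layout is an assumption made for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W: (4n, d), U: (4n, n), b: (4n,) are assumed
    to stack the input-, forget-, output-, and candidate-gate parameters."""
    n = c.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0*n:1*n])   # input gate: admit new information
    f = sigmoid(z[1*n:2*n])   # forget gate: retain old cell content
    o = sigmoid(z[2*n:3*n])   # output gate: expose the cell state
    g = np.tanh(z[3*n:4*n])   # candidate cell values
    c_new = f * c + i * g     # additive update: the constant error
                              # carousel that keeps gradients flowing
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```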

            Learning long-term dependencies with gradient descent is difficult.

Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production, or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching of information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered.
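The difficulty described here can be demonstrated numerically. The following sketch (illustrative only; dimensions, seed, and spectral norm are arbitrary choices) backpropagates a gradient through T steps of a linear recurrence and shows its norm shrinking roughly like the T-th power of the largest singular value of the recurrent matrix.

```python
import numpy as np

# Illustrative demo: the gradient passed back through T steps of the
# linear recurrence h_t = W h_{t-1} is multiplied by W^T each step, so
# its norm scales roughly like (largest singular value of W) ** T.
rng = np.random.default_rng(0)
n, T = 50, 100
W = rng.standard_normal((n, n))
W *= 0.9 / np.linalg.svd(W, compute_uv=False)[0]  # set spectral norm to 0.9

grad = rng.standard_normal(n)
norm0 = np.linalg.norm(grad)
for _ in range(T):
    grad = W.T @ grad  # one step of backpropagation through time
print(np.linalg.norm(grad) / norm0)  # on the order of 0.9**100 ~ 3e-5
```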

              The Vanishing Gradient Problem During Learning Recurrent Neural Nets and Problem Solutions


                Author and article information

                Contributors
                john.martinsson@gmail.com
                alexander@schlieplab.org
                bjorn.eliasson@gu.se
                olof@mogren.one
Journal
Journal of Healthcare Informatics Research (J Healthc Inform Res)
Springer International Publishing, Cham
ISSN: 2509-4971 (print); 2509-498X (electronic)
Published online: 1 December 2019
Issue date: March 2020
Volume 4, Issue 1, pp. 1–18
Affiliations
[1] RISE Research Institutes of Sweden, Gothenburg, Sweden
[2] Gothenburg University, Gothenburg, Sweden
[3] Sahlgrenska University Hospital, Gothenburg, Sweden
                Author information
ORCID: http://orcid.org/0000-0002-9567-2218
Article
Article number: 59
DOI: 10.1007/s41666-019-00059-y
PMCID: PMC8982803
PMID: 35415439
                © The Author(s) 2019

                Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

History
Received: 12 December 2018
Revised: 26 April 2019
Accepted: 18 October 2019
                Categories
                Research Article
                © Springer Nature Switzerland AG 2020

Keywords: recurrent neural networks, blood glucose prediction, type 1 diabetes
