
Orchestrating the Development Lifecycle of Machine Learning-based IoT Applications: A Taxonomy and Survey


          Abstract

Machine Learning (ML) and Internet of Things (IoT) are complementary advances: ML techniques unlock the potential of IoT with intelligence, and IoT applications increasingly feed data collected by sensors into ML models, using the results to improve their business processes and services. Hence, orchestrating ML pipelines that encompass model training and inference across the holistic development lifecycle of an IoT application often leads to complex system integration. This article provides a comprehensive and systematic survey of the development lifecycle of ML-based IoT applications. We outline the core roadmap and taxonomy and subsequently assess and compare existing standard techniques used at each stage.
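As a rough illustration of the kind of pipeline the abstract refers to, the sketch below wires sensor-style features through a training stage and an inference stage using scikit-learn. The feature layout, the synthetic labels, and the model choice are illustrative assumptions, not taken from the survey.

    # Minimal sketch of an ML pipeline over IoT sensor data (illustrative only).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical sensor readings: columns = [temperature, humidity, vibration].
    rng = np.random.default_rng(0)
    X_train = rng.random((200, 3))
    y_train = (X_train[:, 2] > 0.5).astype(int)  # synthetic fault labels

    # Training stage: preprocessing and model fitting composed into one pipeline.
    pipeline = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))
    pipeline.fit(X_train, y_train)

    # Inference stage: fresh sensor data flows through the same fitted pipeline.
    X_new = rng.random((5, 3))
    print(pipeline.predict(X_new))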


Most cited references: 168


          Algorithm AS 136: A K-Means Clustering Algorithm


            Classification And Regression Trees


              Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

              Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
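To make the normalization step concrete, here is a minimal NumPy sketch of the training-time batch-norm transform described above: normalize each feature over the mini-batch, then apply a learned scale (gamma) and shift (beta). The function name and the omission of the running statistics used at inference time are our simplifications, not the paper's reference implementation.

    import numpy as np

    def batch_norm_forward(x, gamma, beta, eps=1e-5):
        # x: mini-batch of activations, shape (batch, features).
        mu = x.mean(axis=0)                    # per-feature mean over the mini-batch
        var = x.var(axis=0)                    # per-feature variance over the mini-batch
        x_hat = (x - mu) / np.sqrt(var + eps)  # normalize to ~zero mean, unit variance
        return gamma * x_hat + beta            # learned scale and shift restore expressiveness

    # After normalization, each feature has ~zero mean and unit variance.
    rng = np.random.default_rng(0)
    x = rng.normal(loc=2.0, scale=5.0, size=(4, 3))
    y = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
    print(y.mean(axis=0), y.std(axis=0))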

                Author and article information

Journal: ACM Computing Surveys (ACM Comput. Surv.)
Publisher: Association for Computing Machinery (ACM)
ISSN: 0360-0300 (print); 1557-7341 (electronic)
Published: September 26, 2020
Volume: 53
Issue: 4
Pages: 1-47
                Affiliations
                [1 ]Newcastle University, UK
                [2 ]University of Leeds, UK
                [3 ]The University of Sydney, Australia
                [4 ]Cardiff University, UK
[5 ]China University of Geosciences (Wuhan), China
DOI: 10.1145/3398020
                © 2020
