
      The relative performance of ensemble methods with deep convolutional neural networks for image classification

      Journal of Applied Statistics
      Informa UK Limited


          Abstract

Artificial neural networks have been successfully applied to a variety of machine learning tasks, including image recognition, semantic segmentation, and machine translation. However, few studies have fully investigated ensembles of artificial neural networks. In this work, we investigated multiple widely used ensemble methods, including unweighted averaging, majority voting, the Bayes Optimal Classifier, and the (discrete) Super Learner, for image recognition tasks, with deep neural networks as candidate algorithms. We designed several experiments, with the candidate algorithms being the same network structure with different model checkpoints within a single training process, networks with the same structure but trained multiple times stochastically, and networks with different structures. In addition, we further studied the over-confidence phenomenon of the neural networks, as well as its impact on the ensemble methods. Across all of our experiments, the Super Learner achieved the best performance among all the ensemble methods in this study.
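The ensemble strategies named in the abstract differ mainly in how they pool the candidate networks' outputs. As a rough illustration (not the authors' code), unweighted averaging and majority voting can be sketched with hypothetical softmax outputs from three candidate networks:

```python
import numpy as np

# Hypothetical softmax outputs of 3 candidate networks on 4 images,
# 3 classes; in the paper these would come from trained CNNs.
preds = np.array([
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.4, 0.3], [0.6, 0.3, 0.1]],
    [[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.5, 0.3, 0.2], [0.2, 0.5, 0.3]],
    [[0.8, 0.1, 0.1], [0.3, 0.5, 0.2], [0.2, 0.2, 0.6], [0.4, 0.4, 0.2]],
])  # shape: (n_models, n_images, n_classes)

# Unweighted averaging: mean predicted probability per class, then argmax.
avg_labels = preds.mean(axis=0).argmax(axis=-1)

# Majority voting: each model casts a hard vote; most common label wins.
votes = preds.argmax(axis=-1)                 # shape: (n_models, n_images)
maj_labels = np.array([np.bincount(v, minlength=3).argmax()
                       for v in votes.T])

print(avg_labels.tolist())  # [0, 1, 2, 0]
print(maj_labels.tolist())  # [0, 1, 0, 0]
```

Note that the two rules can disagree (image 3 here): averaging keeps the soft probabilities, while voting discards everything except each model's top class.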

Most cited references (12)


          Stacked generalization


            Limits on the majority vote accuracy in classifier fusion


              Super-Learning of an Optimal Dynamic Treatment Rule

              We consider the estimation of an optimal dynamic two time-point treatment rule defined as the rule that maximizes the mean outcome under the dynamic treatment, where the candidate rules are restricted to depend only on a user-supplied subset of the baseline and intermediate covariates. This estimation problem is addressed in a statistical model for the data distribution that is nonparametric, beyond possible knowledge about the treatment and censoring mechanisms. We propose data adaptive estimators of this optimal dynamic regime which are defined by sequential loss-based learning under both the blip function and weighted classification frameworks. Rather than a priori selecting an estimation framework and algorithm, we propose combining estimators from both frameworks using a super-learning based cross-validation selector that seeks to minimize an appropriate cross-validated risk. The resulting selector is guaranteed to asymptotically perform as well as the best convex combination of candidate algorithms in terms of loss-based dissimilarity under conditions. We offer simulation results to support our theoretical findings.
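The cross-validation selector described in this cited abstract is the same idea the main paper applies as the discrete Super Learner: fit each candidate algorithm on training folds, score it on the held-out folds, and keep the candidate with the smallest cross-validated risk. A minimal sketch on synthetic data (the candidates and data here are hypothetical, not from either paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs stand in for extracted image features.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Two hypothetical candidate classifiers; each fit returns a predictor.
def fit_nearest_mean(Xtr, ytr):
    mu0, mu1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    return lambda Xte: (np.linalg.norm(Xte - mu1, axis=1)
                        < np.linalg.norm(Xte - mu0, axis=1)).astype(int)

def fit_first_feature(Xtr, ytr):
    t = Xtr[:, 0].mean()
    return lambda Xte: (Xte[:, 0] > t).astype(int)

candidates = [fit_nearest_mean, fit_first_feature]

# Discrete Super Learner: minimize K-fold cross-validated
# misclassification risk over the candidate library.
K = 5
folds = np.array_split(rng.permutation(len(y)), K)
risks = np.zeros(len(candidates))
for te in folds:
    tr = np.setdiff1d(np.arange(len(y)), te)
    for j, fit in enumerate(candidates):
        pred = fit(X[tr], y[tr])(X[te])
        risks[j] += np.mean(pred != y[te]) / K

best = int(np.argmin(risks))  # refit this candidate on all the data
```

The oracle guarantee quoted above says this selector asymptotically does as well as the best candidate (or best convex combination, for the full Super Learner) with respect to the chosen loss.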

                Author and article information

Journal: Journal of Applied Statistics
Publisher: Informa UK Limited
ISSN: 0266-4763 (print); 1360-0532 (electronic)
Dates: February 15 2018; February 26 2018; November 18 2018
Volume: 45
Issue: 15
Pages: 2800-2818
Affiliations
[1] University of California, Berkeley, CA, USA
Article
DOI: 10.1080/02664763.2018.1441383
PMC: 6800663
PMID: 31631918
Record ID: 514cbfd0-8858-4105-bd20-012a4da47335
© 2018
