
      A deep residual compensation extreme learning machine and applications


          Abstract

          The extreme learning machine (ELM) is a machine learning algorithm for training a single-hidden-layer feedforward neural network. The weights between the input layer and the hidden layer, together with the threshold of each hidden-layer neuron, are initialized randomly; the output weight matrix of the hidden layer is then computed by the least squares method. This efficient learning procedure makes ELM widely applicable in classification, regression, and other tasks. However, because some information in the residual goes unused, ELM can produce relatively large prediction errors. This paper presents a deep residual compensation extreme learning machine (DRC-ELM), a multilayer model applied to regression. The first layer is a basic ELM layer, which learns the characteristics of the sample to obtain an approximation of the objective function. The remaining layers are residual compensation layers: each constructs a feature mapping between the input layer and the output of the upper layer, learns the residual, and corrects the previous layer's prediction layer by layer. The model is applied to two practical problems: gold price forecasting and airfoil self-noise prediction. Experiments with DRC-ELM using 50, 100, and 200 residual compensation layers show that it achieves better generalization and robustness than the classical ELM, improved ELM models such as GA-RELM and OS-ELM, and other traditional machine learning algorithms such as the support vector machine (SVM) and the back-propagation neural network (BPNN).
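          As a rough illustration of the layered scheme described above, here is a minimal NumPy sketch: a base ELM fitted by least squares, followed by residual compensation layers that each regress the remaining residual on the input concatenated with the previous layer's prediction. The tanh activation, the pseudoinverse solution, and the exact feature mapping are assumptions for illustration, not the paper's specification.

              import numpy as np

              rng = np.random.default_rng(0)

              def elm_fit(X, y, n_hidden):
                  # Basic ELM: random input weights and thresholds, least-squares output weights.
                  W = rng.standard_normal((X.shape[1], n_hidden))
                  b = rng.standard_normal(n_hidden)
                  H = np.tanh(X @ W + b)              # hidden-layer output matrix
                  beta = np.linalg.pinv(H) @ y        # Moore-Penrose least-squares solution
                  return W, b, beta

              def elm_predict(model, X):
                  W, b, beta = model
                  return np.tanh(X @ W + b) @ beta

              def drc_elm_fit(X, y, n_hidden, n_layers):
                  # Base ELM layer approximates the objective function; each residual
                  # compensation layer regresses the remaining residual on the input
                  # concatenated with the upper layer's output (an assumed mapping).
                  models = [elm_fit(X, y, n_hidden)]
                  pred = elm_predict(models[0], X)
                  for _ in range(n_layers):
                      Z = np.column_stack([X, pred])       # input + upper-layer output
                      m = elm_fit(Z, y - pred, n_hidden)   # learn the current residual
                      models.append(m)
                      pred = pred + elm_predict(m, Z)      # correct the prediction layer by layer
                  return models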


          Most cited references (49)


          Deep Residual Learning for Image Recognition


            AIC model selection using Akaike weights.

            The Akaike information criterion (AIC; Akaike, 1973) is a popular method for comparing the adequacy of multiple, possibly nonnested models. Current practice in cognitive psychology is to accept a single model on the basis of only the "raw" AIC values, making it difficult to unambiguously interpret the observed AIC differences in terms of a continuous measure such as probability. Here we demonstrate that AIC values can be easily transformed to so-called Akaike weights (e.g., Akaike, 1978, 1979; Bozdogan, 1987; Burnham & Anderson, 2002), which can be directly interpreted as conditional probabilities for each model. We show by example how these Akaike weights can greatly facilitate the interpretation of the results of AIC model comparison procedures.
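            The transformation the abstract describes follows the standard form from Burnham & Anderson (2002): with Delta_i = AIC_i - min(AIC), the weight of model i is w_i = exp(-Delta_i/2) / sum_k exp(-Delta_k/2). A minimal sketch (the example AIC values are illustrative):

                import numpy as np

                def akaike_weights(aic_values):
                    # Delta_i = AIC_i - min(AIC); w_i = exp(-Delta_i/2) / sum_k exp(-Delta_k/2)
                    aic = np.asarray(aic_values, dtype=float)
                    delta = aic - aic.min()           # AIC differences relative to the best model
                    rel_lik = np.exp(-0.5 * delta)    # relative likelihood of each model
                    return rel_lik / rel_lik.sum()    # normalized: conditional model probabilities

                # Example: akaike_weights([204.2, 203.0, 207.5])
                # -> approximately [0.33, 0.60, 0.06], summing to 1 and favoring the second model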

              Extreme learning machine for regression and multiclass classification.

              Due to the simplicity of their implementations, the least squares support vector machine (LS-SVM) and the proximal support vector machine (PSVM) have been widely used in binary classification applications. The conventional LS-SVM and PSVM cannot be used directly in regression and multiclass classification, although variants of LS-SVM and PSVM have been proposed to handle such cases. This paper shows that both LS-SVM and PSVM can be simplified further and that a unified learning framework of LS-SVM, PSVM, and other regularization algorithms, referred to as the extreme learning machine (ELM), can be built. ELM works for "generalized" single-hidden-layer feedforward networks (SLFNs), but the hidden layer (also called the feature mapping) in ELM need not be tuned. Such SLFNs include, but are not limited to, SVMs, polynomial networks, and conventional feedforward neural networks. This paper shows the following: 1) ELM provides a unified learning platform with a widespread type of feature mappings and can be applied directly in regression and multiclass classification; 2) from the optimization point of view, ELM has milder optimization constraints than LS-SVM and PSVM; 3) in theory, compared to ELM, LS-SVM and PSVM achieve suboptimal solutions and require higher computational complexity; and 4) in theory, ELM can approximate any continuous target function and classify any disjoint regions. As verified by the simulation results, ELM tends to have better scalability and to achieve similar (for regression and binary classification) or much better (for multiclass classification) generalization performance at much faster learning speed (up to thousands of times) than traditional SVM and LS-SVM.
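              A hedged sketch of the multiclass use described above, assuming one-hot targets, a tanh feature mapping, an argmax decision rule, and a ridge-regularized output solution with parameter C; names and defaults are illustrative, not the paper's exact formulation:

                  import numpy as np

                  rng = np.random.default_rng(1)

                  def elm_multiclass_fit(X, labels, n_classes, n_hidden, C=1.0):
                      # ELM for multiclass classification: regress onto one-hot targets.
                      # The hidden layer (feature mapping) is random and never tuned.
                      T = np.eye(n_classes)[labels]                    # one-hot target matrix
                      W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
                      b = rng.standard_normal(n_hidden)                # random thresholds
                      H = np.tanh(X @ W + b)                           # hidden-layer outputs
                      beta = np.linalg.solve(np.eye(n_hidden) / C + H.T @ H, H.T @ T)
                      return W, b, beta

                  def elm_multiclass_predict(model, X):
                      W, b, beta = model
                      return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)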

                Author and article information

                Journal
                Journal of Forecasting (Wiley)
                ISSN: 0277-6693 (print); 1099-131X (online)
                Published online: February 17, 2020
                Issue: September 2020, Volume 39, Issue 6, pages 986-999
                Affiliations
                [1] College of Mathematics and Statistics, Central South University, Changsha, China
                [2] School of Mathematics and Statistics, Hunan University of Technology and Business, Changsha, China
                [3] College of Finance and Statistics, Hunan University, Changsha, China
                Article
                DOI: 10.1002/for.2663
                © 2020
                License: http://onlinelibrary.wiley.com/termsAndConditions#vor
