Open Access

      Constructing a Large-Scale Urban Land Subsidence Prediction Method Based on Neural Network Algorithm from the Perspective of Multiple Factors

      Remote Sensing
      MDPI AG


          Abstract

          Existing neural network models for urban land-subsidence prediction rely too heavily on historical subsidence data: they cannot accurately capture or predict fluctuations in the sequence deformation, and improper selection of training samples directly degrades the final prediction accuracy for large-scale urban land subsidence. To address these shortcomings, a subsidence prediction method based on a neural network algorithm was constructed in this study from a multi-factorial perspective, and the selection of a large range of training samples was controlled using a K-shape clustering algorithm in order to produce a high-precision urban land-subsidence prediction method. Specifically, the main urban area of Kunming city was taken as the research object; LiCSBAS technology was adopted to obtain the land-subsidence deformation information for the main urban area of Kunming from 2018 to 2021, and the relationship between land subsidence and its influencing factors was revealed through a grey correlation analysis. Hydrogeology, geological structure, faults, groundwater, high-speed railways, and high-rise buildings were selected as the influencing factors. Reliable subsidence training samples were obtained using the time-series K-shape clustering algorithm. A particle swarm optimization–back propagation (PSO-BP) neural network was constructed from a multi-factorial perspective to predict the urban land subsidence, and the fluctuation in the urban land-subsidence sequence deformation was predicted with an LSTM neural network, also from a multi-factorial perspective. Finally, the large-scale urban land-subsidence prediction was performed. The results demonstrate that the maximum subsidence rate in the main urban area of Kunming reached −30.591 mm·a⁻¹ between 2018 and 2021.
Moreover, there were four main significant subsidence areas in the region, unevenly distributed along Dianchi Lake: land subsidence tended to occur within 200–600 m of large commercial areas and high-rise buildings, within 400–1200 m of the under-construction subway, and in areas with an annual average rainfall of 109–117 mm. Furthermore, the development of faults destroys the stability of the soil structure and further aggravates land subsidence; hydrogeology, geological structure, and groundwater also influence land subsidence in the main urban area of Kunming. Clustering the subsidence data with the K-shape algorithm improves the reliability of training-sample selection, and the constructed multi-factorial PSO-BP method can effectively predict the subsidence rate with a mean squared error (MSE) of 4.820 mm, a slight improvement over the non-clustered prediction. The constructed multi-factorial long short-term memory (LSTM) model was used to predict the next ten periods of time-series subsidence data in the three cluster classes (Cluster 1, Cluster 2, and Cluster 3): the root mean square errors (RMSE) were 0.445, 1.475, and 1.468 mm; the absolute error ranges were 0.007–1.030, 0–3.001, and 0.401–3.679 mm; and the mean absolute errors (MAE) were 0.319, 1.214, and 1.167 mm, respectively. The prediction accuracy was significantly improved, and the predictions met the measurement specifications. Overall, the prediction method proposed from the multi-factorial perspective enables large-scale, high-accuracy urban land-subsidence prediction.
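The grey correlation (grey relational) analysis used to rank influencing factors against the subsidence series can be sketched as below. This is an illustrative sketch of the standard grey relational grade computation, not the authors' code; the function name, the min–max normalization, and the distinguishing coefficient ρ = 0.5 are assumptions.

```python
import numpy as np

def grey_relational_grade(reference, factors, rho=0.5):
    """Grey relational grade of each factor series against a reference series.

    reference: shape (T,), e.g. a subsidence-rate time series
    factors:   list of F series, each shape (T,), candidate influencing factors
    rho:       distinguishing coefficient, conventionally 0.5
    """
    # Min-max normalize each series to [0, 1] so different units are comparable.
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())

    ref = norm(reference)
    fac = np.array([norm(f) for f in factors])       # shape (F, T)

    delta = np.abs(fac - ref)                        # pointwise deviations
    dmin, dmax = delta.min(), delta.max()

    # Grey relational coefficient at each time step, averaged per factor.
    xi = (dmin + rho * dmax) / (delta + rho * dmax)
    return xi.mean(axis=1)                           # higher grade = stronger link
```

A factor whose normalized curve coincides with the reference gets a grade of 1; weakly related factors score lower, which is how the six factors above would be ranked.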

          Related collections

          Most cited references (46)


          Long Short-Term Memory

          Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
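The constant error carousel and multiplicative gates described in this abstract can be sketched as a single step of an LSTM cell in NumPy. This is a minimal illustration using the modern formulation with a forget gate (the 1997 paper had none); the weight layout, shapes, and names are assumptions, not the paper's notation.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step: gates modulate the additive cell-state 'carousel' c.

    x: input (d,); h_prev, c_prev: previous hidden/cell state (n,)
    W: stacked gate weights (4n, d+n); b: stacked gate biases (4n,)
    """
    n = h_prev.size
    z = W @ np.concatenate([x, h_prev]) + b          # all gate pre-activations
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))

    i = sig(z[:n])            # input gate: opens access to the cell state
    f = sig(z[n:2 * n])       # forget gate: scales the carried-over cell state
    o = sig(z[2 * n:3 * n])   # output gate: closes off the cell's readout
    g = np.tanh(z[3 * n:])    # candidate cell update

    c = f * c_prev + i * g    # additive update: error can flow through unchanged
    h = o * np.tanh(c)        # gated hidden output
    return h, c
```

The additive `c` update is the "constant error flow" the abstract refers to: when the forget gate stays near 1 and the input gate near 0, gradients pass through `c` undiminished across many time steps.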

            Deep learning in neural networks: An overview

            In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

              Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) network


                Author and article information

                Journal: Remote Sensing, MDPI AG (ISSN 2072-4292)
                Published: 08 April 2022; Volume 14, Issue 8, Article 1803
                DOI: 10.3390/rs14081803
                License: © 2022, CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
