
      Application of wavelet-packet transform driven deep learning method in PM2.5 concentration prediction: A case study of Qingdao, China

      Sustainable Cities and Society
      Elsevier BV

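The record carries no abstract, but the title points to a wavelet-packet decomposition of the PM2.5 series feeding a deep learning predictor. The sketch below shows what such a preprocessing step could look like; the wavelet family ("db4"), the decomposition depth, the synthetic series, and the suggestion of an LSTM head are illustrative assumptions, not details taken from the article.

```python
import numpy as np
import pywt

# Hypothetical hourly PM2.5 series (synthetic stand-in; the paper uses Qingdao monitoring data).
rng = np.random.default_rng(0)
t = np.arange(24 * 64)
pm25 = 35 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, t.size)

# Wavelet-packet decomposition: split the series into sub-bands at a fixed level.
# Wavelet family ("db4") and depth (3) are assumptions, not values from the article.
wp = pywt.WaveletPacket(data=pm25, wavelet="db4", mode="symmetric", maxlevel=3)
nodes = wp.get_level(3, order="natural")             # 2**3 = 8 leaf nodes
subbands = np.stack([node.data for node in nodes])   # one coefficient series per sub-band

print(subbands.shape)
# Each sub-band series would then be fed to a deep model (e.g. an LSTM) and the
# sub-band predictions recombined to forecast the PM2.5 concentration.
```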


Most cited references (63)


          Improved protein structure prediction using potentials from deep learning

Protein structure prediction can be used to determine the three-dimensional shape of a protein from its amino acid sequence [1]. This problem is of fundamental importance as the structure of a protein largely determines its function [2]; however, protein structures can be difficult to determine experimentally. Considerable progress has recently been made by leveraging genetic information. It is possible to infer which amino acid residues are in contact by analysing covariation in homologous sequences, which aids in the prediction of protein structures [3]. Here we show that we can train a neural network to make accurate predictions of the distances between pairs of residues, which convey more information about the structure than contact predictions. Using this information, we construct a potential of mean force [4] that can accurately describe the shape of a protein. We find that the resulting potential can be optimized by a simple gradient descent algorithm to generate structures without complex sampling procedures. The resulting system, named AlphaFold, achieves high accuracy, even for sequences with fewer homologous sequences. In the recent Critical Assessment of Protein Structure Prediction [5] (CASP13), a blind assessment of the state of the field, AlphaFold created high-accuracy structures (with template modelling (TM) scores [6] of 0.7 or higher) for 24 out of 43 free modelling domains, whereas the next best method, which used sampling and contact information, achieved such accuracy for only 14 out of 43 domains. AlphaFold represents a considerable advance in protein-structure prediction. We expect this increased accuracy to enable insights into the function and malfunction of proteins, especially in cases for which no structures for homologous proteins have been experimentally determined [7].
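The key step described above, optimizing a learned potential by plain gradient descent to produce 3D coordinates, can be illustrated with a toy example. The sketch below is not AlphaFold: the target distances are random placeholders standing in for the network's predicted inter-residue distances, and the potential is a simple squared-error term rather than a potential of mean force.

```python
import torch

# Toy "potential": penalize deviation of current pairwise distances from target distances.
# In AlphaFold the targets come from a distance-predicting neural network; here they are
# random placeholders (and may not even be geometrically realizable).
n_residues = 20
torch.manual_seed(0)
target_dist = torch.rand(n_residues, n_residues) * 10.0
target_dist = (target_dist + target_dist.T) / 2          # symmetrize

coords = torch.randn(n_residues, 3, requires_grad=True)  # 3D coordinates to optimize
optimizer = torch.optim.SGD([coords], lr=1e-2)

for step in range(500):
    optimizer.zero_grad()
    dist = torch.cdist(coords, coords)                    # current pairwise distances
    mask = ~torch.eye(n_residues, dtype=torch.bool)       # ignore self-distances
    potential = ((dist - target_dist)[mask] ** 2).mean()
    potential.backward()
    optimizer.step()

print(f"final potential: {potential.item():.4f}")
```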

            A Comprehensive Survey on Graph Neural Networks

Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches to graph data have emerged. In this article, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art GNNs into four categories, namely, recurrent GNNs, convolutional GNNs, graph autoencoders, and spatial-temporal GNNs. We further discuss the applications of GNNs across various domains and summarize the open-source codes, benchmark data sets, and model evaluation of GNNs. Finally, we propose potential research directions in this rapidly growing field.
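As a concrete illustration of the "convolutional GNN" category mentioned above, the following is a minimal, generic GCN-style layer (adjacency with self-loops, symmetric normalization, then a linear map and ReLU). The graph and weights are made up for the example; nothing here comes from the survey itself.

```python
import numpy as np

def gcn_layer(adj, feats, weights):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                       # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
    return np.maximum(a_norm @ feats @ weights, 0.0)

# Tiny example graph: 4 nodes, 3 input features, 2 output features.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
feats = rng.normal(size=(4, 3))
weights = rng.normal(size=(3, 2))
print(gcn_layer(adj, feats, weights))
```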

              Dropout: A Simple Way to Prevent Neural Networks from Overfitting.
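Only the title of this reference is recorded. It names the technique, so a minimal sketch of inverted dropout as it is commonly implemented follows; the shapes and drop rate are arbitrary for the example and are not taken from the paper.

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: zero each unit with probability p during training and
    rescale survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

activations = np.ones((2, 5))
print(dropout(activations, p=0.5, rng=np.random.default_rng(0)))
# Roughly half the entries are zeroed; the rest are scaled to 2.0.
```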


                Author and article information

                Contributors
Journal: Sustainable Cities and Society
Publisher: Elsevier BV
ISSN: 2210-6707
Issue date: May 2023
Volume: 92
Article number: 104486
DOI: 10.1016/j.scs.2023.104486
Record ID: ecb15b7e-ab26-4651-a6d3-98272b51163e
                © 2023

License: https://www.elsevier.com/tdm/userlicense/1.0/
Permissions: https://doi.org/10.15223/policy-017, https://doi.org/10.15223/policy-037, https://doi.org/10.15223/policy-012, https://doi.org/10.15223/policy-029, https://doi.org/10.15223/policy-004

