
      Optimizing hydropower scheduling through accurate power load prediction: A practical case study

      research-article


          Abstract

Hydropower stations that are part of the grid system frequently encounter challenges related to the uneven distribution of power generation and associated benefits, primarily stemming from delays in obtaining timely load data. This research addresses the issue by developing a scheduling model that combines power load prediction with dual-objective optimization. The practical application of the model is demonstrated in a real case, the Shatuo Hydropower Station in China. In contrast to existing models, the proposed model can achieve optimal dispatch for grid-connected hydropower stations even when power load data are unavailable. The model first evaluates several prediction models for estimating power load and then feeds the predictions into the GA-NSGA-II algorithm, an enhanced elite non-dominated sorting genetic algorithm, while considering the proposed objective functions to optimize the discharge flow of the hydropower station. The results show that the CNN-GRU (Convolutional Neural Network-Gated Recurrent Unit) model achieves the highest prediction accuracy, with R-squared and RMSE (Root Mean Square Error) values of 0.991 and 0.026, respectively. The difference between scheduling based on predicted load values and on actual load values is minimal, staying within 5 m³/s, demonstrating practical effectiveness. The optimized scheduling in the real case study yields dual advantages, meeting the demands of both ship navigation and hydropower generation and achieving a harmonious balance between the two. This approach addresses the real-world challenges of delayed load data collection and insufficient scheduling, offering an efficient solution for managing hydropower station scheduling to meet both power generation and navigation needs.
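The abstract names a CNN-GRU load predictor evaluated with R-squared and RMSE but gives no architecture details. Below is a minimal, hypothetical sketch of such a model in Keras; the layer sizes, look-back window, and training settings are illustrative assumptions, not the authors' configuration.

```python
# Minimal CNN-GRU load-prediction sketch (hypothetical configuration).
# Assumes a load history shaped (samples, window, 1); all sizes are illustrative.
import numpy as np
import tensorflow as tf

WINDOW = 24  # assumed look-back window of 24 time steps

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),  # local temporal features
    tf.keras.layers.GRU(64),                                       # longer-range dynamics
    tf.keras.layers.Dense(1),                                      # next-step load
])
model.compile(optimizer="adam", loss="mse")

# Toy stand-in data for the (unavailable) station load series.
X = np.random.rand(256, WINDOW, 1).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

# R-squared and RMSE, the two metrics reported in the abstract.
pred = model.predict(X, verbose=0)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
r2 = 1.0 - float(np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2))
print(f"RMSE={rmse:.3f}, R2={r2:.3f}")
```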

          Graphical abstract

          Highlights

• A new dispatch model integrates load prediction and dual-objective optimization for efficient hydropower scheduling, even when load data are missing (a sketch of the dual-objective step follows these highlights).

• The hydropower optimization accounts for station constraints, prioritizing efficiency for optimal dispatching.

• The model is applied at the Shatuo Hydropower Station, demonstrating real-world performance and applicability without load data.
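As referenced in the first highlight, the dual-objective dispatch step rests on non-dominated (Pareto) sorting, the core idea behind the NSGA-II family. The sketch below shows only that sorting step on two toy objectives cast as minimization problems (negative power output, deviation from a navigation-friendly flow); the paper's actual objective functions, constraints, and GA-NSGA-II enhancements are not reproduced here.

```python
# Hypothetical dual-objective illustration: keep the non-dominated
# discharge-flow candidates. Both objectives are minimized.
import numpy as np

def pareto_front(F):
    """Return indices of non-dominated rows of F (rows = candidates, cols = objectives)."""
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                keep[i] = False  # candidate i is dominated by candidate j
                break
    return np.where(keep)[0]

rng = np.random.default_rng(0)
q = rng.uniform(50.0, 500.0, size=200)   # candidate discharge flows (m3/s), illustrative only
f1 = -0.8 * q                            # stand-in: negative power output (minimize)
f2 = np.abs(q - 300.0)                   # stand-in: deviation from a navigation-friendly flow
front = pareto_front(np.column_stack([f1, f2]))
print(f"{front.size} non-dominated candidates out of {q.size}")
```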


Most cited references (26)


          A tutorial on support vector regression


            Review of deep learning: concepts, CNN architectures, challenges, applications, future directions

In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. It has gradually become the most widely used computational approach in the field of ML, achieving outstanding results on several complex cognitive tasks and matching or even beating human performance. One of the benefits of DL is the ability to learn from massive amounts of data. The DL field has grown fast in recent years and has been used extensively to address a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art in DL, each tackles only one aspect of the field, leading to an overall lack of comprehensive coverage. Therefore, this contribution takes a more holistic approach in order to provide a more suitable starting point from which to develop a full understanding of DL. Specifically, the review attempts to provide a comprehensive survey of the most important aspects of DL, including the enhancements recently added to the field. In particular, the paper outlines the importance of DL and presents the types of DL techniques and networks. It then presents convolutional neural networks (CNNs), the most utilized DL network type, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution network (HR.Net). Finally, the challenges and suggested solutions are presented to help researchers understand the existing research gaps, followed by a list of the major DL applications. Computational tools including FPGAs, GPUs, and CPUs are summarized, along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and a summary and conclusion.

              A Review of Recurrent Neural Networks: LSTM Cells and Network Architectures

              Recurrent neural networks (RNNs) have been widely adopted in research areas concerned with sequential data, such as text, audio, and video. However, RNNs consisting of sigma cells or tanh cells are unable to learn the relevant information of input data when the input gap is large. By introducing gate functions into the cell structure, the long short-term memory (LSTM) could handle the problem of long-term dependencies well. Since its introduction, almost all the exciting results based on RNNs have been achieved by the LSTM. The LSTM has become the focus of deep learning. We review the LSTM cell and its variants to explore the learning capacity of the LSTM cell. Furthermore, the LSTM networks are divided into two broad categories: LSTM-dominated networks and integrated LSTM networks. In addition, their various applications are discussed. Finally, future research directions are presented for LSTM networks.
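For readers unfamiliar with the gate functions mentioned above, here is a minimal single-step LSTM cell written out in NumPy; the weight shapes and random initialization are purely illustrative and not tied to any model in the article.

```python
# One LSTM time step, written out to show the gate functions the review describes.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """W, U, b each hold stacked parameters for the input, forget, output, and cell gates."""
    Wi, Wf, Wo, Wc = W
    Ui, Uf, Uo, Uc = U
    bi, bf, bo, bc = b
    i = sigmoid(Wi @ x + Ui @ h_prev + bi)   # input gate
    f = sigmoid(Wf @ x + Uf @ h_prev + bf)   # forget gate
    o = sigmoid(Wo @ x + Uo @ h_prev + bo)   # output gate
    c_tilde = np.tanh(Wc @ x + Uc @ h_prev + bc)
    c = f * c_prev + i * c_tilde             # cell state carries long-term information
    h = o * np.tanh(c)                       # hidden state exposed to the next layer
    return h, c

# Illustrative sizes: 4-dimensional input, 8-dimensional hidden state.
rng = np.random.default_rng(1)
n_in, n_hid = 4, 8
W = [rng.standard_normal((n_hid, n_in)) * 0.1 for _ in range(4)]
U = [rng.standard_normal((n_hid, n_hid)) * 0.1 for _ in range(4)]
b = [np.zeros(n_hid) for _ in range(4)]
h, c = lstm_step(rng.standard_normal(n_in), np.zeros(n_hid), np.zeros(n_hid), W, U, b)
print(h.shape, c.shape)
```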

                Author and article information

                Contributors
Journal
Heliyon (Elsevier)
ISSN: 2405-8440
Published online: 21 March 2024
Issue date: 15 April 2024
Volume 10, Issue 7, Article e28312
                Affiliations
[a] Guizhou Wujiang River Navigation Authority, Tongren, 565100, Guizhou, China
[b] College of Mathematics and Information Science, Hebei University, Baoding, 071002, Hebei, China
[c] Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, Guangdong, China
[d] Guizhou Silin Navigation Authority, Tongren, 565100, Guizhou, China
[e] Guizhou Zhongnan Transport Technology Co., Ltd., Guiyang, 550000, Guizhou, China
                Author notes
[*] Corresponding author: qiang@siat.ac.cn
Article
PII: S2405-8440(24)04343-3 (e28312)
DOI: 10.1016/j.heliyon.2024.e28312
PMCID: PMC10987994
PMID: 38571578
                © 2024 The Author(s)

                This is an open access article under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).

History
Received: 18 June 2023
Revised: 11 March 2024
Accepted: 15 March 2024
                Categories
                Research Article

Keywords: multi-objective optimization, scheduling strategy, hydropower station, prediction algorithm, neural networks, deep learning
