
      SoC-VRP: A Deep-Reinforcement-Learning-Based Vehicle Route Planning Mechanism for Service-Oriented Cooperative ITS

          Abstract

With the rapid development of emerging information technology and its increasing integration with transportation systems, the Intelligent Transportation System (ITS) is entering a new phase, called Cooperative ITS (C-ITS). It offers promising solutions to numerous challenges in traditional transportation systems, among which the Vehicle Routing Problem (VRP) is a significant concern addressed in this work. Considering the varying urgency levels of different vehicles and their differing travel constraints in the Service-oriented Cooperative ITS (SoC-ITS) framework studied in our previous research, the Service-oriented Cooperative Vehicle Routing Problem (SoC-VRP) is first analyzed, in which cooperative planning and vehicle urgency degrees are two vital factors. After examining the characteristics of both VRP and SoC-VRP, a Deep Reinforcement Learning (DRL)-based prioritized route planning mechanism is proposed. Specifically, we establish a deep reinforcement learning model with Rainbow DQN and devise a prioritized successive decision-making route planning method for SoC-ITS, where vehicle urgency degrees are mapped to three priorities: High for emergency vehicles, Medium for shuttle buses, and Low for the rest. All proposed models and methods are implemented, trained using various scenarios on typical road networks, and verified in SUMO-based scenarios. Experimental results demonstrate the effectiveness of this hybrid prioritized route planning mechanism.
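Since the abstract pins down the priority scheme and the successive planning order, a small sketch can make the mechanism concrete. The Python below is an illustrative reconstruction, not the paper's code: all names (`priority_of`, `plan_route`, `network_state`) are hypothetical, and the learned Rainbow DQN policy is abstracted behind a callback.

```python
from enum import IntEnum

class Priority(IntEnum):
    HIGH = 0    # emergency vehicles
    MEDIUM = 1  # shuttle buses
    LOW = 2     # all remaining vehicles

def priority_of(vehicle_type: str) -> Priority:
    """Map a vehicle's urgency degree to one of the three priorities."""
    if vehicle_type == "emergency":
        return Priority.HIGH
    if vehicle_type == "shuttle_bus":
        return Priority.MEDIUM
    return Priority.LOW

def plan_successively(vehicles, plan_route, network_state):
    """Prioritized successive decision-making: route urgent vehicles first.

    `plan_route(vehicle, network_state)` stands in for one rollout of the
    learned (e.g. Rainbow DQN) routing policy; each planned route is folded
    back into `network_state` so lower-priority vehicles plan against the
    load already committed by more urgent traffic.
    """
    routes = {}
    for v in sorted(vehicles, key=lambda v: priority_of(v["type"])):
        route = plan_route(v, network_state)
        network_state[v["id"]] = route  # expose the decision to later vehicles
        routes[v["id"]] = route
    return routes
```

Ordering by priority before planning is what makes the scheme "successive": an emergency vehicle's route is fixed before any shuttle bus plans, which in turn plans before ordinary traffic.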


Most cited references (38)


          Human-level control through deep reinforcement learning.

          The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
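As a point of reference for the mechanism this abstract describes, here is a minimal PyTorch sketch of the one-step deep Q-network update against a frozen target network. Tensor shapes and the Huber loss are assumptions in line with common DQN practice; the full agent (replay memory, ε-greedy exploration, target syncing) is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def dqn_loss(q_net: nn.Module, target_net: nn.Module, batch, gamma: float = 0.99):
    """One-step TD loss for a deep Q-network.

    `batch` is a tuple of tensors: states (B, ...), actions (B,) int64,
    rewards (B,), next_states (B, ...), dones (B,) in {0., 1.}.
    """
    states, actions, rewards, next_states, dones = batch
    # Q(s, a) for the actions actually taken in the replayed transitions.
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrap from a periodically synced, frozen target network.
        next_q = target_net(next_states).max(dim=1).values
        td_target = rewards + gamma * (1.0 - dones) * next_q
    # Huber loss, as is common for DQN training stability.
    return F.smooth_l1_loss(q_sa, td_target)
```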

            OpenStreetMap: User-Generated Street Maps


              Deep Reinforcement Learning with Double Q-Learning

              The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.
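The adaptation described above is compact enough to show directly. This PyTorch sketch (tensor shapes assumed as in the previous sketch) computes the Double DQN target: the online network selects the greedy next action and the target network evaluates it, which is what curbs the overestimation.

```python
import torch

@torch.no_grad()
def double_dqn_target(q_net, target_net, rewards, next_states, dones, gamma=0.99):
    # Selection: the online network picks the greedy next action...
    best_actions = q_net(next_states).argmax(dim=1, keepdim=True)
    # ...evaluation: the target network scores that action. Decoupling
    # selection from evaluation reduces the maximization bias of the
    # plain max-based Q-learning target.
    next_q = target_net(next_states).gather(1, best_actions).squeeze(1)
    return rewards + gamma * (1.0 - dones) * next_q
```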

                Author and article information

Journal: Electronics (ELECGJ)
Publisher: MDPI AG
ISSN: 2079-9292
Published: 10 October 2023
Volume 12, Issue 20, Article 4191
DOI: 10.3390/electronics12204191
© 2023. Licensed under CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
