
      Time-average optimal constrained semi-Markov decision processes

Advances in Applied Probability (JSTOR)


          Abstract

          Optimal causal policies maximizing the time-average reward over a semi-Markov decision process (SMDP), subject to a hard constraint on a time-average cost, are considered. Rewards and costs depend on the state and action, and contain running as well as switching components. It is supposed that the state space of the SMDP is finite, and the action space compact metric. The policy determines an action at each transition point of the SMDP.
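The model above can be made concrete with a small numerical sketch. The following toy example (states, transition probabilities, rewards, and mean holding times are invented for illustration, not taken from the paper) computes the time-average reward of one fixed stationary policy as a renewal-reward ratio over the embedded jump chain:

```python
# Hypothetical toy SMDP under one fixed stationary policy.
# All numbers below are illustrative assumptions, not from the paper.

def stationary_distribution(P, iters=2000):
    """Stationary distribution of the embedded jump chain by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def time_average_reward(P, reward, holding):
    """Renewal-reward ratio: expected reward per jump / expected time per jump."""
    pi = stationary_distribution(P)
    num = sum(p * r for p, r in zip(pi, reward))
    den = sum(p * t for p, t in zip(pi, holding))
    return num / den

# Embedded transition matrix under the chosen actions (rows sum to 1).
P = [[0.0, 0.7, 0.3],
     [0.5, 0.0, 0.5],
     [0.4, 0.6, 0.0]]
reward  = [4.0, 1.0, 2.5]   # running reward accrued per visit to each state
holding = [1.0, 2.0, 0.5]   # mean sojourn time in each state under the action

print(round(time_average_reward(P, reward, holding), 4))
```

Under the accessibility hypothesis of the paper, this ratio is independent of the initial state, which is what makes the several notions of time average coincide.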

Under an accessibility hypothesis, several notions of time average are equivalent. A Lagrange multiplier formulation involving a dynamic programming equation is utilized to relate the constrained optimization to an unconstrained optimization parametrized by the multiplier. This approach leads to a proof for the existence of a semi-simple optimal constrained policy. That is, there is at most one state for which the action is randomized between two possibilities; at every other state a single action is chosen. Affine forms for the rewards, costs and transition probabilities further reduce the optimal constrained policy to ‘almost bang-bang’ form, in which the optimal policy is not randomized, and is bang-bang except perhaps at one state. Under the same assumptions, one can alternatively find an optimal constrained policy that is strictly bang-bang, but may be randomized at one state.

Application is made to flow control of a birth-and-death process (e.g., an M/M/s queue); under certain monotonicity restrictions on the reward and cost structure the preceding results apply, and in addition there is a simple acceptance region.
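The flow-control application can be sketched as follows. This is a hedged illustration, not the paper's construction verbatim: it assumes an M/M/1 queue with invented rates, where a threshold-n policy (a bang-bang rule) accepts arrivals only while the queue length is below n; reward is throughput, cost is mean queue length, and the randomization between two neighboring thresholds stands in for the randomized/semi-simple policy, approximated here by a convex mixture of the two policies' stationary rates:

```python
# Hedged sketch: threshold (bang-bang) admission control of an M/M/1 queue.
# The rates lam, mu and the cost cap are made-up numbers for illustration.

def birth_death_stats(lam, mu, n):
    """Stationary throughput and mean queue length under threshold n (M/M/1/n)."""
    rho = lam / mu
    w = [rho ** k for k in range(n + 1)]   # unnormalized stationary probabilities
    z = sum(w)
    p = [x / z for x in w]
    throughput = lam * (1 - p[n])          # arrivals accepted per unit time
    mean_queue = sum(k * pk for k, pk in enumerate(p))
    return throughput, mean_queue

lam, mu, cost_cap = 0.8, 1.0, 1.0

# Scan thresholds; take the largest one meeting the cost constraint, then
# mix with the next threshold so the time-average cost meets the cap exactly.
feasible = max(n for n in range(1, 20)
               if birth_death_stats(lam, mu, n)[1] <= cost_cap)
r_lo, c_lo = birth_death_stats(lam, mu, feasible)
r_hi, c_hi = birth_death_stats(lam, mu, feasible + 1)
alpha = (cost_cap - c_lo) / (c_hi - c_lo)  # mixing weight on the larger threshold
mixed_reward = (1 - alpha) * r_lo + alpha * r_hi
mixed_cost = (1 - alpha) * c_lo + alpha * c_hi
print(feasible, round(mixed_reward, 4), round(mixed_cost, 4))
```

The scan works because, for this birth-and-death model, both throughput and mean queue length increase with the threshold, mirroring the monotonicity restrictions under which the paper obtains a simple acceptance region.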


                Author and article information

Journal: Advances in Applied Probability
Publisher: JSTOR
ISSN: 0001-8678 (print), 1475-6064 (electronic)
Issue date: June 1986 (online July 01 2016)
Volume 18, Issue 2, pp. 341-359
DOI: 10.2307/1427303
© 1986
Terms of use: https://www.cambridge.org/core/terms
