
      Revealing Neurocomputational Mechanisms of Reinforcement Learning and Decision-Making With the hBayesDM Package

      research-article


          Abstract

Reinforcement learning and decision-making (RLDM) provide a quantitative framework and computational theories with which we can disentangle psychiatric conditions into the basic dimensions of neurocognitive functioning. RLDM offer a novel approach to assessing and potentially diagnosing psychiatric patients, and there is growing enthusiasm for both RLDM and computational psychiatry among clinical researchers. Such a framework can also provide insights into the brain substrates of particular RLDM processes, as exemplified by model-based analysis of data from functional magnetic resonance imaging (fMRI) or electroencephalography (EEG). However, researchers often find the approach too technical and have difficulty adopting it for their research. Thus, a critical need remains to develop a user-friendly tool for the wide dissemination of computational psychiatric methods. We introduce an R package called hBayesDM (hierarchical Bayesian modeling of Decision-Making tasks), which offers computational modeling of an array of RLDM tasks and social exchange games. The hBayesDM package offers state-of-the-art hierarchical Bayesian modeling, in which both individual and group parameters (i.e., posterior distributions) are estimated simultaneously in a mutually constraining fashion. At the same time, the package is extremely user-friendly: users can perform computational modeling, output visualization, and Bayesian model comparisons, each with a single line of coding. Users can also extract the trial-by-trial latent variables (e.g., prediction errors) required for model-based fMRI/EEG. With the hBayesDM package, we anticipate that anyone with minimal knowledge of programming can take advantage of cutting-edge computational-modeling approaches to investigate the underlying processes of and interactions between multiple decision-making (e.g., goal-directed, habitual, and Pavlovian) systems. In this way, we expect that the hBayesDM package will contribute to the dissemination of advanced modeling approaches and enable a wide range of researchers to easily perform computational psychiatric research within different populations.
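As a concrete illustration of the single-line workflow the abstract describes, the sketch below fits two candidate models of the orthogonalized go/no-go task to the example dataset bundled with the package, inspects the group-level posteriors, and performs a Bayesian model comparison. Function and argument names follow the hBayesDM documentation; the sampler settings shown (iterations, warmup, chains, cores) are illustrative assumptions, not recommendations.

# Minimal hBayesDM workflow sketch in R; sampler settings are illustrative.
library(hBayesDM)

# Fit model 1 of the orthogonalized go/no-go task to the bundled example data.
# Individual- and group-level posteriors are estimated jointly (hierarchically).
output1 <- gng_m1(data = "example", niter = 2000, nwarmup = 1000,
                  nchain = 4, ncore = 4)

# Fit a competing model of the same task.
output2 <- gng_m2(data = "example", niter = 2000, nwarmup = 1000,
                  nchain = 4, ncore = 4)

# Visualize group-level posterior distributions and check MCMC convergence.
plot(output1)
rhat(output1)

# Bayesian model comparison (leave-one-out information criterion, LOOIC).
printFit(output1, output2)

# Individual-level parameter estimates, e.g., for correlating with symptom scores.
output1$allIndPars

Trial-by-trial latent variables (e.g., reward prediction errors) for model-based fMRI/EEG can likewise be extracted from the fitted model object, as described in the article.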

Most cited references (66)

          The neural basis of loss aversion in decision-making under risk.

          People typically exhibit greater sensitivity to losses than to equivalent gains when making decisions. We investigated neural correlates of loss aversion while individuals decided whether to accept or reject gambles that offered a 50/50 chance of gaining or losing money. A broad set of areas (including midbrain dopaminergic regions and their targets) showed increasing activity as potential gains increased. Potential losses were represented by decreasing activity in several of these same gain-sensitive areas. Finally, individual differences in behavioral loss aversion were predicted by a measure of neural loss aversion in several regions, including the ventral striatum and prefrontal cortex.
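The behavioral loss aversion referred to above is commonly quantified with a prospect-theory-style value function; one standard formulation (the same general form used in hBayesDM's risk-aversion models) is

\[
  u(x) =
  \begin{cases}
    x^{\rho} & \text{if } x \ge 0 \\
    -\lambda\,(-x)^{\rho} & \text{if } x < 0
  \end{cases}
\]

where \(\lambda > 1\) indicates greater sensitivity to losses than to equivalent gains and \(\rho\) captures diminishing sensitivity to outcome magnitude. The symbols follow common convention and are given here for orientation, not taken from the cited article's notation.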

            A discounting framework for choice with delayed and probabilistic rewards.

When choosing between delayed or uncertain outcomes, individuals discount the value of such outcomes on the basis of the expected time to or the likelihood of their occurrence. In an integrative review of the expanding experimental literature on discounting, the authors show that although the same form of hyperbola-like function describes discounting of both delayed and probabilistic outcomes, a variety of recent findings are inconsistent with a single-process account. The authors also review studies that compare discounting in different populations and discuss the theoretical and practical implications of the findings. The present effort illustrates the value of studying choice involving both delayed and probabilistic outcomes within a general discounting framework that uses similar experimental procedures and a common analytical approach. (© 2004 APA, all rights reserved)
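The hyperbola-like discounting function referred to in this review is conventionally written as

\[
  V = \frac{A}{(1 + kD)^{s}}
\]

where \(V\) is the subjective value of an outcome of amount \(A\), \(D\) is the delay (or, for probabilistic outcomes, the odds against receiving the outcome, \((1-p)/p\)), \(k\) is an individually estimated discounting-rate parameter, and \(s\) is a scaling exponent; with \(s = 1\) this reduces to the simple hyperbolic form \(V = A/(1 + kD)\) implemented, for example, in hBayesDM's hyperbolic delay-discounting model. The notation is the conventional one, included here for orientation rather than quoted from the cited article.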

              Decisions from experience and the effect of rare events in risky choice.

When people have access to information sources such as newspaper weather forecasts, drug-package inserts, and mutual-fund brochures, all of which provide convenient descriptions of risky prospects, they can make decisions from description. When people must decide whether to back up their computer's hard drive, cross a busy street, or go out on a date, however, they typically do not have any summary description of the possible outcomes or their likelihoods. For such decisions, people can call only on their own encounters with such prospects, making decisions from experience. Decisions from experience and decisions from description can lead to dramatically different choice behavior. In the case of decisions from description, people make choices as if they overweight the probability of rare events, as described by prospect theory. We found that in the case of decisions from experience, in contrast, people make choices as if they underweight the probability of rare events, and we explored the impact of two possible causes of this underweighting: reliance on relatively small samples of information and overweighting of recently sampled information. We conclude with a call for two different theories of risky choice.
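The prospect-theory overweighting of rare events mentioned above is usually expressed through a nonlinear probability-weighting function; a common one-parameter form (Tversky & Kahneman, 1992) is

\[
  w(p) = \frac{p^{\gamma}}{\bigl(p^{\gamma} + (1-p)^{\gamma}\bigr)^{1/\gamma}}
\]

with \(0 < \gamma < 1\), which overweights small probabilities and underweights moderate-to-large ones; decisions from experience behave as if this pattern were reversed for rare events. This formula is included for orientation and is not taken from the cited article.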

                Author and article information

Journal
Computational Psychiatry (Cambridge, Mass.); abbreviated Comput Psychiatr (cpsy)
Publisher: MIT Press (One Rogers Street, Cambridge, MA 02142-1209, USA; journals-info@mit.edu)
ISSN: 2379-6227
Publication date: 01 October 2017 (October 2017 issue)
Volume: 1
Pages: 24-57
Affiliations
[1] Department of Psychology, The Ohio State University, Columbus, OH
[2] Institute for Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
Author notes

Competing Interests: The authors declare no conflict of interest.

[*] Corresponding author: wooyoung.ahn@gmail.com
Article
Article ID: CPSY_a_00002
DOI: 10.1162/CPSY_a_00002
PMCID: PMC5869013
PMID: 29601060
Record ID: 2e3ae005-df6f-4d66-b43b-54adfaf8b227
© 2017 Massachusetts Institute of Technology. Published under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

History
Received: 20 July 2016
Accepted: 06 March 2017
Funding
Funded by: German Research Foundation; Award ID: DFG GRK 1247
Funded by: Bernstein Computational Neuroscience Program of the German Federal Ministry of Education and Research; Award ID: BMBF Grant 01GQ1006
                Categories
                Research
Citation
Ahn, W.-Y., Haines, N., & Zhang, L. (2017). Revealing neurocomputational mechanisms of reinforcement learning and decision-making with the hBayesDM package. Computational Psychiatry, 1, 24–57. https://doi.org/10.1162/cpsy_a_00002

Keywords: reinforcement learning, decision-making, hierarchical Bayesian modeling, model-based fMRI
