
      Integrated Information in Discrete Dynamical Systems: Motivation and Theoretical Framework

      Research Article
      PLoS Computational Biology
      Public Library of Science


          Abstract

          This paper introduces a time- and state-dependent measure of integrated information, φ, which captures the repertoire of causal states available to a system as a whole. Specifically, φ quantifies how much information is generated (uncertainty is reduced) when a system enters a particular state through causal interactions among its elements, above and beyond the information generated independently by its parts. Such mathematical characterization is motivated by the observation that integrated information captures two key phenomenological properties of consciousness: (i) there is a large repertoire of conscious experiences so that, when one particular experience occurs, it generates a large amount of information by ruling out all the others; and (ii) this information is integrated, in that each experience appears as a whole that cannot be decomposed into independent parts. This paper extends previous work on stationary systems and applies integrated information to discrete networks as a function of their dynamics and causal architecture. An analysis of basic examples indicates the following: (i) φ varies depending on the state entered by a network, being higher if active and inactive elements are balanced and lower if the network is inactive or hyperactive. (ii) φ varies for systems with identical or similar surface dynamics depending on the underlying causal architecture, being low for systems that merely copy or replay activity states. (iii) φ varies as a function of network architecture. High φ values can be obtained by architectures that conjoin functional specialization with functional integration. Strictly modular and homogeneous systems cannot generate high φ because the former lack integration, whereas the latter lack information. Feedforward and lattice architectures are capable of generating high φ but are inefficient. (iv) In Hopfield networks, φ is low for attractor states and neutral states, but increases if the networks are optimized to achieve tension between local and global interactions. These basic examples appear to match well against neurobiological evidence concerning the neural substrates of consciousness. More generally, φ appears to be a useful metric to characterize the capacity of any physical system to integrate information.
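          The a priori/a posteriori repertoire idea in the abstract can be sketched for a tiny deterministic system. This is a minimal illustration, not the paper's full formalism: the real φ also minimizes over bipartitions and perturbs the parts independently; `effective_information` and the toy update rules are illustrative names, and the divergence here reduces to a simple counting formula only because the dynamics are deterministic and the a priori repertoire is uniform.

```python
from itertools import product
from math import log2

def effective_information(update, n, x1):
    """ei(X0 -> x1) for a deterministic system of n binary elements:
    the relative entropy between the a posteriori repertoire (uniform
    over the prior states that cause x1) and the a priori
    maximum-entropy repertoire (uniform over all 2**n states)."""
    states = list(product((0, 1), repeat=n))
    preimage = [s for s in states if tuple(update(s)) == tuple(x1)]
    if not preimage:
        return 0.0  # x1 is never entered; it generates no information
    return n - log2(len(preimage))

# Toy system: two elements that swap states each step.
swap = lambda s: (s[1], s[0])
print(effective_information(swap, 2, (1, 0)))  # 2.0: a unique cause rules out 3 of 4 priors

# A system that ignores its input generates no information.
constant = lambda s: (0, 0)
print(effective_information(constant, 2, (0, 0)))  # 0.0: every prior state leads here
```

The contrast between the two toy systems mirrors point (ii) of the abstract: surface state alone does not determine φ; the causal repertoire behind it does.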

          Author Summary

          We have suggested that consciousness has to do with a system's capacity to generate integrated information. This suggestion stems from considering two basic properties of consciousness: (i) each conscious experience generates a large amount of information, by ruling out alternative experiences; and (ii) the information is integrated, meaning that it cannot be decomposed into independent parts. We introduce a measure that quantifies how much integrated information is generated by a discrete dynamical system in the process of transitioning from one state to the next. The measure captures the information generated by the causal interactions among the elements of the system, above and beyond the information generated independently by its parts. We present numerical analyses of basic examples, which match well against neurobiological evidence concerning the neural substrates of consciousness. The framework establishes an observer-independent view of information by taking an intrinsic perspective on interactions.


          Most cited references (23)


          Neural networks and physical systems with emergent collective computational abilities.

          J Hopfield (1982)
            Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices.
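            The content-addressable recall described above can be sketched in a few lines. This is a minimal sketch, not Hopfield's formulation in full: his updates occur asynchronously in random order, whereas a fixed sweep order is used here for determinism, and all names (`train`, `recall`) are illustrative.

```python
def train(patterns):
    """Hebbian weights for +1/-1 patterns; no self-connections."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, sweeps=5):
    """Asynchronous updates: each unit sees the latest values of the others."""
    s = list(state)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

memory = [1, -1, 1, -1, 1, -1, 1, -1]
W = train([memory])
cue = memory[:]
cue[0], cue[1], cue[2] = -cue[0], -cue[1], -cue[2]  # corrupt a subpart
print(recall(W, cue) == memory)  # True: the entire memory is retrieved
```

Flipping a minority of units leaves the corrupted cue inside the stored pattern's basin of attraction, so the dynamics flow back to the full memory, which is the "entire memory from any subpart of sufficient size" property in the abstract.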

            An information integration theory of consciousness

            Background: Consciousness poses two main problems. The first is understanding the conditions that determine to what extent a system has conscious experience. For instance, why is our consciousness generated by certain parts of our brain, such as the thalamocortical system, and not by other parts, such as the cerebellum? And why are we conscious during wakefulness and much less so during dreamless sleep? The second problem is understanding the conditions that determine what kind of consciousness a system has. For example, why do specific parts of the brain contribute specific qualities to our conscious experience, such as vision and audition?

            Presentation of the hypothesis: This paper presents a theory about what consciousness is and how it can be measured. According to the theory, consciousness corresponds to the capacity of a system to integrate information. This claim is motivated by two key phenomenological properties of consciousness: differentiation – the availability of a very large number of conscious experiences; and integration – the unity of each such experience. The theory states that the quantity of consciousness available to a system can be measured as the Φ value of a complex of elements. Φ is the amount of causally effective information that can be integrated across the informational weakest link of a subset of elements. A complex is a subset of elements with Φ>0 that is not part of a subset of higher Φ. The theory also claims that the quality of consciousness is determined by the informational relationships among the elements of a complex, which are specified by the values of effective information among them. Finally, each particular conscious experience is specified by the value, at any given time, of the variables mediating informational interactions among the elements of a complex.

            Testing the hypothesis: The information integration theory accounts, in a principled manner, for several neurobiological observations concerning consciousness. As shown here, these include the association of consciousness with certain neural systems rather than with others; the fact that neural processes underlying consciousness can influence or be influenced by neural processes that remain unconscious; the reduction of consciousness during dreamless sleep and generalized seizures; and the time requirements on neural interactions that support consciousness.

            Implications of the hypothesis: The theory entails that consciousness is a fundamental quantity, that it is graded, that it is present in infants and animals, and that it should be possible to build conscious artifacts.
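            The "informational weakest link" in the hypothesis above amounts to a minimum over bipartitions. The following is a minimal sketch under stated assumptions: `ei_across` is a caller-supplied placeholder for effective information across a partition, and the normalization the theory applies when comparing bipartitions of different sizes is omitted.

```python
from itertools import combinations

def bipartitions(elements):
    """All splits of a set into two non-empty, unordered parts."""
    elements = list(elements)
    n = len(elements)
    for r in range(1, n // 2 + 1):
        for part in combinations(elements, r):
            rest = tuple(e for e in elements if e not in part)
            if r == n - r and part > rest:
                continue  # skip the mirror image of an even split
            yield part, rest

def phi(elements, ei_across):
    """Phi = effective information across the weakest bipartition.
    ei_across(a, b) is a hypothetical stand-in for the theory's measure."""
    return min(ei_across(a, b) for a, b in bipartitions(elements))

# With a toy ei that scales with part sizes, the weakest link is the
# most lopsided split: {a} versus {b, c, d}.
print(phi("abcd", lambda a, b: len(a) * len(b)))  # 3
```

Finding a complex would then mean running this search over every subset of the system and keeping those subsets with Φ>0 that are not contained in a subset of higher Φ; the number of bipartitions grows exponentially, which is why the paper's analyses are restricted to small networks.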

              Turning on and off recurrent balanced cortical activity.

              The vast majority of synaptic connections onto neurons in the cerebral cortex arise from other cortical neurons, both excitatory and inhibitory, forming local and distant 'recurrent' networks. Although this is a basic theme of cortical organization, its study has been limited largely to theoretical investigations, which predict that local recurrent networks show a proportionality or balance between recurrent excitation and inhibition, allowing the generation of stable periods of activity. This recurrent activity might underlie such diverse operations as short-term memory, the modulation of neuronal excitability with attention, and the generation of spontaneous activity during sleep. Here we show that local cortical circuits do indeed operate through a proportional balance of excitation and inhibition generated through local recurrent connections, and that the operation of such circuits can generate self-sustaining activity that can be turned on and off by synaptic inputs. These results confirm the long-hypothesized role of recurrent activity as a basic operation of the cerebral cortex.

                Author and article information

                Journal
                PLoS Comput Biol (PLoS Computational Biology)
                Public Library of Science (San Francisco, USA)
                ISSN: 1553-734X, 1553-7358
                Published: 13 June 2008
                Volume 4, Issue 6, e1000091
                Affiliations
                [1] Department of Psychiatry, University of Wisconsin, Madison, Wisconsin, United States of America
                Indiana University, United States of America
                Author notes

                Analyzed the data: DB GT. Wrote the paper: DB GT.

                Article
                07-PLCB-RA-0820R4
                DOI: 10.1371/journal.pcbi.1000091
                PMC: 2386970
                PMID: 18551165
                Balduzzi, Tononi. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
                History
                Received: 26 December 2007
                Accepted: 29 April 2008
                Page count
                Pages: 18
                Categories
                Research Article
                Computational Biology
                Computational Biology/Computational Neuroscience
                Neuroscience/Theoretical Neuroscience

                Quantitative & Systems biology
