      Simplified and Yet Turing Universal Spiking Neural P Systems with Communication on Request


          Abstract

          Spiking neural P systems are a class of third-generation neural networks belonging to the framework of membrane computing. Spiking neural P systems with communication on request (SNQ P systems) are a type of spiking neural P system in which spikes are requested from neighboring neurons. SNQ P systems have previously been proved universal (computationally equivalent to Turing machines) when two types of spikes are used. This paper studies a simplified version of SNQ P systems, namely SNQ P systems with only one type of spike. It is proved that one type of spike is enough to guarantee the Turing universality of SNQ P systems. Theoretical results are established for SNQ P systems used in both the generating and the accepting mode. Furthermore, the influence of the number of unbounded neurons (neurons in which the number of spikes is not bounded) on the computational power of SNQ P systems with one type of spike is investigated. It is found that SNQ P systems working as number-generating devices with one type of spike and four unbounded neurons are Turing universal.
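The "communication on request" mechanism can be illustrated with a toy simulator. The rule format, the synchronous update and the two-neuron example below are illustrative assumptions for intuition only, not the constructions proved universal in the paper (conflicting simultaneous requests, forgetting rules and halting conditions are ignored).

```python
# Toy round-based simulator for an SNQ-P-style system with a single
# type of spike. Rules and neurons here are hypothetical examples.

def step(counts, rules):
    """Apply all enabled request rules once, synchronously.

    counts: dict neuron -> number of spikes it currently holds
    rules:  list of (receiver, guard, sender, amount); the receiver
            requests `amount` spikes from `sender` when the receiver's
            own spike count equals `guard`.
    """
    requests = []
    for receiver, guard, sender, amount in rules:
        if counts[receiver] == guard and counts[sender] >= amount:
            requests.append((receiver, sender, amount))
    new = dict(counts)
    for receiver, sender, amount in requests:
        new[sender] -= amount      # requested spikes move; they are not copied
        new[receiver] += amount
    return new

# A two-neuron example: n1 pulls one spike per step from n2.
counts = {"n1": 0, "n2": 3}
rules = [("n1", 0, "n2", 1), ("n1", 1, "n2", 1), ("n1", 2, "n2", 1)]
for _ in range(3):
    counts = step(counts, rules)
print(counts)  # -> {'n1': 3, 'n2': 0}
```

Because spikes move rather than being copied, the total number of spikes is conserved by every request, which is one way to see why the counting arguments in the paper can track register values in neuron contents.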


          Most cited references (60)


          Information processing using a single dynamical node as complex system

Nonlinear systems with delayed feedback and/or delayed coupling, often simply called 'delay systems', are a class of dynamical systems that have attracted considerable attention, both because of their fundamental interest and because they arise in a variety of real-life systems [1]. It has been shown that delay has an ambivalent impact on the dynamical behaviour of systems, either stabilizing or destabilizing them [2]. Often it is sufficient to tune a single parameter (for example, the feedback strength) to access a variety of behaviours, ranging from stable via periodic and quasi-periodic oscillations to deterministic chaos [3]. From the point of view of applications, the dynamics of delay systems is gaining more and more interest. While initially it was considered more of a nuisance, it is now viewed as a resource that can be beneficially exploited. One of the simplest possible delay systems consists of a single nonlinear node whose dynamics is influenced by its own output a time τ in the past. Such a system is easy to implement, because it comprises only two elements: a nonlinear node and a delay loop. A well-studied example is found in optics: a semiconductor laser whose output light is fed back to the laser by an external mirror at a certain distance [4]. In this article, we demonstrate how the rich dynamical properties of delay systems can be beneficially employed for processing time-dependent signals, by appropriately modifying the concept of reservoir computing.

Reservoir computing (RC) [5,6,7,8,9,10] is a recently introduced, bio-inspired, machine-learning paradigm that exhibits state-of-the-art performance for processing empirical data. Tasks that are deemed computationally hard, such as chaotic time series prediction [7] or speech recognition [11,12], amongst others, can be successfully performed.
The main inspiration underlying RC is the insight that the brain processes information by generating patterns of transient neuronal activity excited by input sensory signals [13]. Therefore, RC mimics neuronal networks. Traditional RC implementations are generally composed of three distinct parts: an input layer, the reservoir and an output layer, as illustrated in Figure 1a. The input layer feeds the input signals to the reservoir via fixed, random-weight connections. The reservoir usually consists of a large number of randomly interconnected nonlinear nodes, constituting a recurrent network, that is, a network with internal feedback loops. Under the influence of input signals, the network exhibits transient responses. These transient responses are read out at the output layer via a linear weighted sum of the individual node states.

The objective of RC is to implement a specific nonlinear transformation of the input signal or to classify the inputs. Classification involves discriminating between a set of input data, for example, identifying features of images, voices or time series. To perform its task, RC requires a training procedure. As recurrent networks are notoriously difficult to train, they were not widely used until the advent of RC. In RC, this problem is resolved by keeping the connections fixed: the only part of the system that is trained is the set of output layer weights. Thus, the training does not affect the dynamics of the reservoir itself. As a result of this training procedure, the system is capable of generalizing, that is, of processing unseen inputs or attributing them to previously learned classes.

To solve its tasks efficiently, a reservoir should satisfy several key properties. First, it should nonlinearly transform the input signal into a high-dimensional state space in which the signal is represented.
This is achieved through the use of a large number of reservoir nodes that are connected to each other through the recurrent nonlinear dynamics of the reservoir. In practice, traditional RC architectures employ several hundreds or thousands of nonlinear reservoir nodes to obtain good performance. In Figure 2, we illustrate how such a nonlinear mapping to a high-dimensional state space facilitates the separation (classification) of states [14]. Second, the dynamics of the reservoir should exhibit a fading memory (that is, a short-term memory): the reservoir state is influenced by inputs from the recent past but is independent of inputs from the far past. This property is essential for processing temporal sequences (such as speech) for which only the recent history of the signal is important. Additionally, the results of RC computations must be reproducible and robust against noise. For this, the reservoir should exhibit sufficiently different dynamical responses to inputs belonging to different classes. At the same time, the reservoir should not be too sensitive: similar inputs should not be associated with different classes. These competing requirements define when a reservoir performs well. Typically, reservoirs depend on a few parameters (such as the feedback gain) that must be adjusted to satisfy the above constraints. Experience shows that these requirements are met when the reservoir operates (in the absence of input) in a stable regime, but not too far from a bifurcation point. A further introduction to RC, and in particular its connection with other approaches to machine learning, can be found in the Supplementary Discussion.

In this article, we propose to implement a reservoir computer in which the usual structure of multiple connected nodes is replaced by a dynamical system comprising a single nonlinear node subjected to delayed feedback.
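The traditional RC setup described above (fixed random reservoir, trained linear readout) can be sketched as a minimal echo state network. The sizes, the sinusoidal one-step-ahead prediction task and the ridge regularization below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Minimal echo-state-network sketch: fixed random input and reservoir
# weights; only the linear readout is trained (here by ridge regression).
rng = np.random.default_rng(0)
N = 100                                    # number of reservoir nodes
W_in = rng.uniform(-0.5, 0.5, (N, 1))      # fixed input weights
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1: fading memory

u = np.sin(0.2 * np.arange(1000))          # input signal
X = np.zeros((len(u), N))                  # collected transient responses
x = np.zeros(N)
for t, ut in enumerate(u):
    x = np.tanh(W_in[:, 0] * ut + W @ x)   # nonlinear transient response
    X[t] = x

y = np.roll(u, -1)                         # target: next input sample
washout = 100                              # discard the initial transient
A, b = X[washout:-1], y[washout:-1]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ b)  # ridge readout

pred = A @ w_out
nrmse = np.sqrt(np.mean((pred - b) ** 2)) / np.std(b)
print(f"one-step-ahead NRMSE: {nrmse:.3f}")
```

Note that training touches only `w_out`; the reservoir dynamics (`W_in`, `W`) stay fixed, which is exactly why recurrent-network training difficulties do not arise here.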
Mathematically, a key feature of time-continuous delay systems is that their state space becomes infinite-dimensional. This is because their state at time t depends on the output of the nonlinear node during the continuous time interval [t−τ, t[, with τ being the delay time. In practice, the dynamics of the delay system remains finite-dimensional [15], but it exhibits the properties of high dimensionality and short-term memory. Therefore, delay systems fulfil the demands required of reservoirs for proper operation. Moreover, they are very attractive systems for implementing RC experimentally, as only a few components are required to build them. Here we show that this intuition is correct: excellent performance on benchmark tasks is obtained when the RC paradigm is adapted to delay systems. This shows that very simple dynamical systems can have high-level information-processing capabilities.

Results

Delay systems as reservoir

In this section, we present the conceptual basis of our scheme, followed by the main results obtained for the two tasks we considered: spoken digit recognition and dynamical system modelling. We start by presenting in Figure 1b the basic principle of our scheme. Within one delay interval of length τ, we define N equidistant points separated in time by θ = τ/N. We denote these N equidistant points as 'virtual nodes', as they play a role analogous to that of the nodes in a traditional reservoir. The values of the delayed variable at each of the N points define the states of the virtual nodes. These states characterize the transient response of our reservoir to a certain input at a given time. The separation time θ among virtual nodes plays an important role and can be used to optimize the reservoir performance.

Author contributions

I.F., C.R.M., S.M., J.Dan., B.S., G.VdS., M.C.S. and J.Dam. contributed to the development and/or implementation of the concept. M.C.S.
performed the experiments, partly assisted by L.A. and supervised by C.R.M. and I.F. L.A. performed the numerical simulations, supervised by G.VdS. and J.Dan. All authors contributed to the discussion of the results and to the writing of the manuscript.

Additional information

How to cite this article: Appeltant, L. et al. Information processing using a single dynamical node as complex system. Nat. Commun. 2:468 doi: 10.1038/ncomms1476 (2011).

Supplementary Material: Supplementary Information, Supplementary Figures S1–S9, Supplementary Discussion and Supplementary References.
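The time-multiplexed 'virtual node' construction described in the excerpt above can be sketched in a few lines. The nonlinearity, the input mask and all parameter values below are illustrative assumptions, not those of the cited paper:

```python
import math

# One nonlinear node with delayed feedback, sampled at N equidistant
# points per delay interval tau; each sample point acts as a virtual node.
N = 50                       # virtual nodes per delay interval
eta, gamma = 0.5, 0.05       # feedback and input scaling (assumed values)
mask = [((7 * i) % N) / N - 0.5 for i in range(N)]  # fixed input mask

def reservoir_states(inputs):
    """Return one N-dimensional virtual-node state vector per input sample."""
    loop = [0.0] * N         # node values over the previous delay interval
    states = []
    for u in inputs:
        new = []
        for i in range(N):
            # each virtual node is driven by the value one delay tau ago
            # plus the masked current input sample
            new.append(math.tanh(eta * loop[i] + gamma * mask[i] * u))
        loop = new
        states.append(new)
    return states

S = reservoir_states([math.sin(0.3 * t) for t in range(20)])
print(len(S), len(S[0]))     # -> 20 50
```

A linear readout trained on these state vectors (exactly as in a conventional reservoir) would complete the scheme; the point of the construction is that a single physical node plus a delay loop emulates an N-node network.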

            Networks of spiking neurons: The third generation of neural network models


              Computing with Membranes


                Author and article information

                Journal
                International Journal of Neural Systems (Int. J. Neur. Syst.)
                World Scientific Pub Co Pte Ltd
                ISSN: 0129-0657, 1793-6462
                Published online: August 26, 2018
                Issue: October 2018
                Volume 28, Issue 08, Article 1850013
                Affiliations
                [1 ]Key Laboratory of Image Information Processing and Intelligent Control of Education Ministry of China, School of Automation, Huazhong University of Science and Technology, Wuhan, Hubei 430074, P. R. China
                [2 ]Department of Computer Science, Faculty of Mathematics and Computer Science, University of Bucharest, Str. Academiei Nr. 14, Sector 1, C.P. 010014, Bucharest, Romania
                [3 ]Department of Bioinformatics, National Institute of Research and Development for Biological Sciences, Splaiul Independenţei, Nr. 296, Sector 6, Bucharest, Romania
                [4 ]School of Electric and Information Engineering, Zhengzhou University of Light Industry, Zhengzhou, Henan 450002, P. R. China
                [5 ]Centre for Computational Intelligence, School of Computer Science and Informatics, De Montfort University, The Gateway, Leicester LE1 9BH, UK
                Article
                DOI: 10.1142/S0129065718500132
                © 2018
