      Brain serotonergic fibers suggest anomalous diffusion-based dropout in artificial neural networks


          Abstract

          Random dropout has become a standard regularization technique in artificial neural networks (ANNs), but it is currently unknown whether an analogous mechanism exists in biological neural networks (BioNNs). If it does, its structure is likely to be optimized by hundreds of millions of years of evolution, which may suggest novel dropout strategies in large-scale ANNs. We propose that the brain serotonergic fibers (axons) meet some of the expected criteria because of their ubiquitous presence, stochastic structure, and ability to grow throughout the individual’s lifespan. Since the trajectories of serotonergic fibers can be modeled as paths of anomalous diffusion processes, in this proof-of-concept study we investigated a dropout algorithm based on the superdiffusive fractional Brownian motion (FBM). The results demonstrate that serotonergic fibers can potentially implement a dropout-like mechanism in brain tissue, supporting neuroplasticity. They also suggest that mathematical theories of the structure and dynamics of serotonergic fibers can contribute to the design of dropout algorithms in ANNs.

Most cited references (50)


          Review of deep learning: concepts, CNN architectures, challenges, applications, future directions

In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. It has gradually become the most widely used computational approach in ML, achieving outstanding results on several complex cognitive tasks and matching or even beating human performance. One of the benefits of DL is its ability to learn from massive amounts of data. The field has grown fast in recent years and has been used to successfully address a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art in DL, each of them tackled only one aspect of the field, leading to an overall lack of knowledge about it. This contribution therefore takes a more holistic approach, providing a more suitable starting point from which to develop a full understanding of DL. Specifically, this review surveys the most important aspects of DL, including recent enhancements to the field. It outlines the importance of DL, presents the types of DL techniques and networks, and then describes convolutional neural networks (CNNs), the most utilized DL network type, tracing the development of CNN architectures and their main features from the AlexNet network to the High-Resolution network (HR.Net). Finally, it presents challenges and suggested solutions to help researchers understand the existing research gaps, followed by a list of the major DL applications. Computational tools including FPGAs, GPUs, and CPUs are summarized along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and a summary and conclusion.

            Receptive fields, binocular interaction and functional architecture in the cat's visual cortex.


              The temporal structures and functional significance of scale-free brain activity.

Scale-free dynamics, with a power spectrum following P ∝ f^(-β), are an intrinsic feature of many complex processes in nature. In neural systems, scale-free activity is often neglected in electrophysiological research. Here, we investigate scale-free dynamics in the human brain and show that they contain extensive nested frequencies, with the phase of lower frequencies modulating the amplitude of higher frequencies in an upward progression across the frequency spectrum. The functional significance of scale-free brain activity is indicated by task-performance modulation and regional variation, with β being larger in the default network and visual cortex and smaller in the hippocampus and cerebellum. The precise patterns of nested frequencies in the brain differ from those of other scale-free dynamics in nature, such as earth seismic waves and stock-market fluctuations, suggesting system-specific generative mechanisms. Our findings reveal robust temporal structures and behavioral significance of scale-free brain activity and should motivate future study of its physiological mechanisms and cognitive implications.
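The scale-free relation P ∝ f^(-β) above can be checked numerically. A minimal sketch, not taken from the cited paper: estimate β as minus the slope of a log-log fit to the periodogram. White noise has a flat spectrum (β near 0), while integrated white noise (a random walk) has an approximately 1/f^2 spectrum (β near 2); the raw-periodogram fit over the full band is somewhat biased low for the latter.

```python
import numpy as np

def spectral_exponent(x):
    # Fit log P(f) = -beta * log f + c over the periodogram; return beta
    x = np.asarray(x, dtype=float)
    freqs = np.fft.rfftfreq(x.size)[1:]               # drop the DC bin
    power = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope

rng = np.random.default_rng(0)
white = rng.standard_normal(2 ** 14)
beta_white = spectral_exponent(white)                 # flat spectrum, beta near 0
beta_brown = spectral_exponent(np.cumsum(white))      # ~1/f^2 spectrum, beta near 2
```

In practice, estimates of β for brain signals are usually computed on a restricted frequency band and with spectral averaging (e.g., Welch's method) to reduce periodogram variance; the single-periodogram fit here is kept minimal for illustration.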

                Author and article information

                Contributors
Journal
Frontiers in Neuroscience (Front. Neurosci.)
Frontiers Media S.A.
ISSN: 1662-4548, 1662-453X
Published: 04 October 2022
Volume: 16
Article number: 949934
Affiliations
1. Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA, United States
2. Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, United States
                Author notes

                Edited by: Jonathan Mapelli, University of Modena and Reggio Emilia, Italy

                Reviewed by: Ilias Rentzeperis, Université Paris-Saclay (CNRS), France; Ahana Gangopadhyay, Washington University in St. Louis, United States

*Correspondence: Skirmantas Janušonis, janusonis@ucsb.edu

                This article was submitted to Neural Technology, a section of the journal Frontiers in Neuroscience

Article
DOI: 10.3389/fnins.2022.949934
PMCID: PMC9577023
PMID: 36267232
                Copyright © 2022 Lee, Zhang and Janušonis.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

History
Received: 21 May 2022
Accepted: 08 September 2022
                Page count
                Figures: 4, Tables: 0, Equations: 0, References: 50, Pages: 9, Words: 6318
                Funding
Funded by: National Science Foundation (doi 10.13039/100000001; three awards)
Funded by: National Institute of Mental Health (doi 10.13039/100000025)
                Categories
                Neuroscience
                Original Research

                Neurosciences
artificial neural networks, convolutional neural networks, dropout, regularization, serotonergic, stochastic, anomalous diffusion, fractional Brownian motion
