      Is Open Access

      Infection diagnosis in hydrocephalus CT images: a domain enriched attention learning approach

      research-article


          Abstract

          Objective.

          Hydrocephalus is the leading indication for pediatric neurosurgical care worldwide. Identifying postinfectious hydrocephalus (PIH) versus non-postinfectious hydrocephalus, as well as the pathogen involved in PIH, is crucial for developing an appropriate treatment plan. Accurate identification requires clinical diagnosis by neuroscientists and microbiological analysis, which are time-consuming and expensive. In this study, we develop a domain enriched AI method for computed tomography (CT)-based infection diagnosis in hydrocephalic imagery. State-of-the-art (SOTA) convolutional neural network (CNN) approaches form an attractive neural engineering solution for this problem, since pathogen-specific features must be discovered. Yet black-box deep networks often require unrealistically abundant training data and are not easily interpreted.

          Approach.

          In this paper, a novel brain attention regularizer is proposed that encourages the CNN to focus its feature extraction and decision making on brain regions. Our approach is then extended to a hybrid 2D/3D network that mines inter-slice information. A new regularization strategy is also designed to enable collaboration between the 2D and 3D branches.
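          The core idea of such an attention regularizer can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function name, the mask convention (1 inside the brain, 0 outside, e.g. from skull stripping), and the normalization are all assumptions. The penalty measures how much of the network's attention mass falls outside the brain mask; adding it to the training loss discourages the CNN from attending to skull or background pixels.

```python
import numpy as np

def brain_attention_loss(attention_map, brain_mask):
    """Fraction of attention mass falling outside the brain mask.

    attention_map : non-negative array, e.g. a spatial attention map
                    produced by the CNN (hypothetical shape: H x W).
    brain_mask    : binary array of the same shape, 1 inside the brain.

    Adding this term to the training objective penalizes attention
    placed on non-brain regions (skull, background).
    """
    outside = attention_map * (1.0 - brain_mask)
    # Normalize by total attention mass so the penalty lies in [0, 1].
    return outside.sum() / max(attention_map.sum(), 1e-8)
```

In a training loop this term would be weighted by a hyperparameter and added to the classification loss, so that minimizing the total loss trades off accuracy against attention leakage outside the brain.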

          Main results.

          Our proposed method achieves SOTA results on a CURE Children’s Hospital of Uganda dataset with an accuracy of 95.8% in hydrocephalus classification and 84% in pathogen classification. Statistical analysis is performed to demonstrate that our proposed methods obtain significant improvements over the existing SOTA alternatives.

          Significance.

          Such attention-regularized learning has particularly pronounced benefits in regimes where training data are limited, thereby enhancing generalizability. To the best of our knowledge, our work is among the early efforts toward interpretable AI-based models for classifying hydrocephalus and the underlying pathogen using CT scans.

          Related collections

          Most cited references (37)


          Adam: A Method for Stochastic Optimization

          We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm. Published as a conference paper at the 3rd International Conference for Learning Representations, San Diego, 2015
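As a hedged sketch of the update rule described above (not the reference implementation; default hyperparameter values follow the paper's suggestions), one Adam step maintains exponential moving averages of the gradient and its elementwise square, applies bias correction, and scales the step by the adaptive denominator:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameters theta given gradient grad.

    m, v : running estimates of the first moment and second raw moment
           of the gradient; t is the 1-based step count.
    """
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second raw moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Note how the bias correction matters early in training: at t = 1 both moving averages are heavily shrunk toward zero, and dividing by (1 - beta**t) rescales them so the very first step has magnitude close to the learning rate.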

            Squeeze-and-Excitation Networks


              PyTorch: An Imperative Style, High-Performance Deep Learning Library

              Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it provides an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several common benchmarks. 12 pages, 3 figures, NeurIPS 2019

                Author and article information

                Journal
                101217933
                32339
                J Neural Eng
                J Neural Eng
                Journal of neural engineering
                1741-2560
                1741-2552
                10 May 2024
                16 June 2023
                16 June 2023
                16 June 2024
                Volume: 20
                Issue: 3
                DOI: 10.1088/1741-2552/acd9ee
                Affiliations
                [1 ]Department of Electrical Engineering, the Pennsylvania State University, University Park, PA 16801, United States of America
                [2 ]Center for Neural Engineering, the Pennsylvania State University, University Park, PA 16801, United States of America
                [3 ]College of Medicine, the Pennsylvania State University, University Park, PA 16801, United States of America
                [4 ]CURE Children’s Hospital of Uganda, Mbale, Uganda
                [5 ]Department of Neurosurgery, Yale University, New Haven, CT 06510, United States of America
                Author notes
                [* ]Author to whom any correspondence should be addressed. mvy5241@psu.edu
                Author information
                http://orcid.org/0000-0002-1812-6571
                http://orcid.org/0000-0003-2623-3614
                Article
                NIHMS1992387
                10.1088/1741-2552/acd9ee
                11099590
                37253355
                31743aae-a9d1-478c-b9ba-94f3cf89a0ed

                Original Content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence.

                History
                Categories
                Article

                machine learning, neural network, attention learning, hydrocephalus, CT, treatment planning, skull stripping


