
      Using Artificial Neural Networks to Predict Intra-Abdominal Abscess Risk Post-Appendectomy

      research-article


          Abstract

          Objective:

          To determine if artificial neural networks (ANN) could predict the risk of intra-abdominal abscess (IAA) development post-appendectomy.

          Background:

          IAA formation occurs in 13.6% to 14.6% of appendicitis cases, with “complicated” appendicitis the most common cause of IAA. Descriptions of appendicitis severity remain inconsistent, and treatment of perforated appendicitis varies accordingly.

          Methods:

          Two “reproducible” ANNs with different architectures were developed using demographic, clinical, and surgical information from a retrospective surgical dataset of 1574 patients less than 19 years old, classified as either negative (n = 1328) or positive (n = 246) for IAA post-appendectomy for appendicitis. Of the 34 independent variables initially considered, the 12 with the highest influence on the outcome were selected for the final dataset used for ANN model training and testing.
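The variable-reduction step can be sketched as a simple ranking; the influence scores and variable names below are hypothetical, since the paper's actual importance measure is not reproduced here:

```python
def top_k_features(importances, k=12):
    """Rank candidate variables by an influence score and keep the top k.

    `importances` maps variable name -> influence on the IAA outcome.
    """
    return sorted(importances, key=importances.get, reverse=True)[:k]

# Hypothetical scores for a handful of the 34 candidate variables
scores = {"age": 0.12, "wbc_count": 0.31, "perforation": 0.55,
          "symptom_days": 0.27, "fever": 0.09}
selected = top_k_features(scores, k=3)  # with the real data, k=12
```

With real data the scores would come from the chosen importance measure (e.g. a sensitivity analysis on the trained network), and `k` would be 12 as in the study.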

          Results:

          A total of 1574 patients were divided into training and test sets (80%/20% split). Model 1 achieved an accuracy of 89.84%, sensitivity of 70%, and specificity of 93.61% on the test set. Model 2 achieved an accuracy of 84.13%, sensitivity of 81.63%, and specificity of 84.6%.
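For reference, the reported metrics relate to confusion-matrix counts as follows; the counts below are hypothetical, chosen only to illustrate the arithmetic (the paper's actual counts are not given here):

```python
def metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts for a ~315-patient test set (20% of 1574)
acc, sens, spec = metrics(tp=35, fn=15, tn=250, fp=15)
```

With these illustrative counts, sensitivity is 35/50 = 70% and specificity is 250/265 ≈ 94%, in the same range as the Model 1 figures, but the true counts would come from the held-out test set.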

          Conclusions:

          ANN applied to selected variables can accurately predict patients who will have IAA post-appendectomy. Our reproducible and explainable ANNs potentially represent a state-of-the-art method for optimizing post-appendectomy care.


          Most cited references (41)


          Multiple imputation by chained equations: what is it and how does it work?

          Multivariate imputation by chained equations (MICE) has emerged as a principled method of dealing with missing data. Despite properties that make MICE particularly useful for large imputation procedures and advances in software development that now make it accessible to many researchers, many psychiatric researchers have not been trained in these methods and few practical resources exist to guide researchers in the implementation of this technique. This paper provides an introduction to the MICE method with a focus on practical aspects and challenges in using this method. A brief review of software programs available to implement MICE and then analyze multiply imputed data is also provided.
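As a toy illustration of the chained-equations idea (not the full MICE algorithm, which cycles over every incomplete variable, adds stochastic draws, and produces multiple imputed datasets), assuming a single incomplete column y and a complete predictor x:

```python
def ols(xs, ys):
    """Ordinary least squares fit y ≈ a + b*x on fully observed pairs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def impute_chained(x, y, n_iter=5):
    """Regression-and-fill core of a chained-equations pass.

    Full MICE would repeat this per incomplete variable and add random
    noise to each draw; the iteration loop matters when several
    incomplete columns feed each other's regressions.
    """
    obs = [i for i, v in enumerate(y) if v is not None]
    miss = [i for i, v in enumerate(y) if v is None]
    y = list(y)
    # Initialize missing entries with the observed mean
    mean_y = sum(y[i] for i in obs) / len(obs)
    for i in miss:
        y[i] = mean_y
    for _ in range(n_iter):
        # Refit on the originally observed rows, then re-predict
        a, b = ols([x[i] for i in obs], [y[i] for i in obs])
        for i in miss:
            y[i] = a + b * x[i]
    return y

filled = impute_chained([1, 2, 3, 4, 5], [2, 4, None, 8, 10])
```

In practice one would reach for a maintained implementation (e.g. the `mice` R package or scikit-learn's `IterativeImputer`) rather than hand-rolling this.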

            Explainability for artificial intelligence in healthcare: a multidisciplinary perspective

            Background: Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.

            Methods: Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the “Principles of Biomedical Ethics” by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.

            Results: Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. From the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.

            Conclusions: To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.

              SMOTE for high-dimensional class-imbalanced data

              Background: Classification using class-imbalanced data is biased in favor of the majority class. The bias is even larger for high-dimensional data, where the number of variables greatly exceeds the number of samples. The problem can be attenuated by undersampling or oversampling, which produce class-balanced data. Generally undersampling is helpful, while random oversampling is not. Synthetic Minority Oversampling TEchnique (SMOTE) is a very popular oversampling method that was proposed to improve random oversampling, but its behavior on high-dimensional data has not been thoroughly investigated. In this paper we investigate the properties of SMOTE from a theoretical and empirical point of view, using simulated and real high-dimensional data.

              Results: While SMOTE seems beneficial in most cases with low-dimensional data, it does not attenuate the bias towards the majority class for most classifiers when data are high-dimensional, and it is less effective than random undersampling. SMOTE is beneficial for k-NN classifiers on high-dimensional data if the number of variables is reduced by performing some type of variable selection; we explain why, otherwise, the k-NN classification is biased towards the minority class. Furthermore, we show that on high-dimensional data SMOTE does not change the class-specific mean values while it decreases the data variability and introduces correlation between samples. We explain how our findings impact class prediction for high-dimensional data.

              Conclusions: In practice, in the high-dimensional setting only k-NN classifiers based on the Euclidean distance seem to benefit substantially from the use of SMOTE, provided that variable selection is performed before using SMOTE; the benefit is larger if more neighbors are used. SMOTE for k-NN without variable selection should not be used, because it strongly biases the classification towards the minority class.
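The interpolation at the heart of SMOTE can be sketched in a few lines; this is a minimal, dependency-free illustration with toy 2-D points (real analyses would use a maintained implementation such as the one in imbalanced-learn):

```python
import random

def smote(minority, k=2, n_new=4, seed=0):
    """Create synthetic minority samples: pick a base point, choose one of
    its k nearest minority neighbors (Euclidean distance), and interpolate
    a random fraction of the way toward it."""
    rng = random.Random(seed)

    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    out = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest minority neighbors of the base point
        nbrs = sorted((p for p in minority if p is not base),
                      key=lambda p: d2(base, p))[:k]
        nb = rng.choice(nbrs)
        lam = rng.random()  # interpolation fraction in [0, 1)
        out.append(tuple(b + lam * (c - b) for b, c in zip(base, nb)))
    return out

synthetic = smote([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
```

Because each synthetic point lies on a segment between two minority samples, all generated points fall inside the convex hull of the minority class, which is also why SMOTE reduces variability and introduces correlation between samples, as the abstract above notes.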

                Author and article information

                Journal
                Ann Surg Open
                Ann Surg Open
                AS9
                Annals of Surgery Open
                Wolters Kluwer Health, Inc. (Two Commerce Square, 2001 Market Street, Philadelphia, PA 19103)
                2691-3593
                June 2022
                23 May 2022
                Volume: 3
                Issue: 2
                e168
                Affiliations
                From the *Division of Infectious Diseases, Department of Pediatrics, UTHealth Houston McGovern Medical School, Houston, TX
                †Division of General and Thoracic Pediatric Surgery, Department of Pediatric Surgery, UTHealth Houston McGovern Medical School, Houston, TX.
                Author notes
                Reprints: Morouge M. Alramadhan, MD, Division of Infectious Diseases, Department of Pediatrics, UTHealth Houston McGovern Medical School, 6431 Fannin St., MSB 3.126, Houston, TX 77030. Email: morouge.m.alramadhan@uth.tmc.edu.
                Article
                00019
                10.1097/AS9.0000000000000168
                10431380
                37601615
                Copyright © 2022 The Author(s). Published by Wolters Kluwer Health, Inc.

                This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.

                History
                Received: 27 September 2021
                Accepted: 18 April 2022
                Categories
                Original Study

                artificial intelligence, intraabdominal abscess, pediatric
