      Artificial Intelligence for Automatic Pain Assessment: Research Methods and Perspectives

Review Article


          Abstract

Although proper pain evaluation is mandatory for establishing appropriate therapy, self-reported pain assessment has several limitations. Data-driven artificial intelligence (AI) methods can be employed for research on automatic pain assessment (APA). The goal is to develop objective, standardized, and generalizable instruments for pain assessment across different clinical contexts. This article discusses the state of the art of research and the perspectives of APA applications in both research and clinical scenarios. The principles of AI functioning are addressed. For narrative purposes, AI-based methods are grouped into behavior-based approaches and neurophysiology-based pain detection methods. Because pain is generally accompanied by spontaneous facial behaviors, several APA approaches rely on image classification and feature extraction. Language features extracted through natural language processing, body postures, and respiratory-derived elements are other investigated behavior-based approaches. Neurophysiology-based pain detection draws on electroencephalography, electromyography, electrodermal activity, and other biosignals. Recent approaches involve multimodal strategies that combine behavioral with neurophysiological findings. Methodologically, early studies used machine learning algorithms such as support vector machine, decision tree, and random forest classifiers. More recently, artificial neural networks such as convolutional and recurrent neural networks have been implemented, sometimes in combination. Collaboration programs involving clinicians and computer scientists should aim at structuring and processing robust datasets that can be used in various settings, from acute to chronic pain conditions. Finally, it is crucial to apply the concepts of explainability and ethics when examining AI applications for pain research and management.
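To make the methodological thread above concrete, the following is a minimal, purely illustrative sketch (not taken from the article) of the classical machine-learning pipeline the abstract describes: hand-crafted neurophysiological features fed to a support vector machine for binary pain detection. The feature names and the synthetic data are assumptions for demonstration only; a real APA study would extract such features from recorded biosignals.

```python
# Illustrative sketch only: an SVM classifier on hypothetical
# neurophysiological features for pain vs. no-pain detection.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a real dataset: each row is one observation window,
# columns are assumed features (e.g., mean electrodermal activity, heart rate
# variability, facial action unit intensity). Labels: 0 = no pain, 1 = pain.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Standardize features, then fit an RBF-kernel SVM, one of the classifier
# families the abstract lists among early APA studies.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The same scaffold generalizes to the other classical methods mentioned (decision trees, random forests) by swapping the final estimator; the deep-learning approaches (convolutional and recurrent networks) instead learn features directly from images or biosignal time series rather than from hand-crafted columns.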


Author and article information

Journal: Pain Research & Management (Pain Res Manag; PRM), Hindawi
ISSN: 1203-6765 (print); 1918-1523 (electronic)
Published: 28 June 2023; Volume 2023, Article ID 6018736

Affiliations
1. Division of Anesthesia and Pain Medicine, Istituto Nazionale Tumori IRCCS Fondazione G. Pascale, Naples 80131, Italy
2. SSD-Innovative Therapies for Abdominal Metastases, Istituto Nazionale Tumori di Napoli IRCCS “G. Pascale”, Via M. Semmola, Naples 80131, Italy
3. Head and Neck Oncology Unit, Istituto Nazionale Tumori IRCCS-Fondazione “G. Pascale”, Naples 80131, Italy
4. DIETI Department, University of Naples, Naples, Italy
5. Division of Hepatobiliary Surgical Oncology, Istituto Nazionale Tumori IRCCS, Fondazione Pascale-IRCCS di Napoli, Naples, Italy
6. Department of Pharmacology, Faculty of Medicine and Psychology, Sapienza University of Rome, Rome, Italy
7. Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Parma, Italy
8. Department of Anesthesia and Critical Care, ARCO ROMA, Ospedale Pediatrico Bambino Gesù IRCCS, Rome 00165, Italy
9. Department of Electrical Engineering and Information Technologies, University of Naples “Federico II”, Naples 80100, Italy

Academic Editor: Li Hu

Author ORCID iDs: https://orcid.org/0000-0002-5236-3132; https://orcid.org/0000-0002-2377-3765

Article identifiers
DOI: 10.1155/2023/6018736
PMCID: PMC10322534
PMID: 37416623

Copyright © 2023 Marco Cascella et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

History: received 22 November 2022; revised 3 February 2023; accepted 20 April 2023

Category: Review Article
