
      Risks and benefits associated with the primary functions of artificial intelligence powered autoinjectors

      research-article


          Abstract

          Objectives

This research aims to present and assess the primary functions of autoinjectors introduced in ISO 11608-1:2022, to investigate the risks in current autoinjector technology, to identify and assess the risks and benefits associated with artificial intelligence (AI)-powered autoinjectors, and to propose a framework for mitigating these risks. ISO 11608-1:2022 is a standard that specifies requirements and test methods for needle-based injection systems intended to deliver drugs, focusing on design and function to ensure patient safety and product effectiveness. 'KZH' is the FDA product code used to classify autoinjectors for regulatory purposes, ensuring they meet defined safety and efficacy standards before being marketed.

          Method

A comprehensive analysis of autoinjector problems is conducted using data from the United States Food and Drug Administration (FDA) database. This database records medical device reporting events, including those related to autoinjectors, reported by various sources. The analysis focuses on events associated with the product code KZH, covering data from January 1, 2008, to September 30, 2023. This research employs statistical frequency analysis and incorporates pertinent FDA, United Kingdom, and European Commission regulations, as well as ISO standards.
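The article does not state which tooling was used for the database analysis. As a hedged illustration only, the event retrieval and frequency counting described above could be reproduced against the public openFDA device-event API; the endpoint and field names below (device.device_report_product_code, date_received, event_type) are assumptions drawn from the openFDA documentation, not from the study itself.

```python
# Illustrative sketch only; the study does not specify its tooling.
# Assumes the public openFDA device-event endpoint and its documented fields.
import requests

BASE = "https://api.fda.gov/device/event.json"

# Count reported event types for autoinjectors (product code KZH)
# received between 2008-01-01 and 2023-09-30, mirroring the study window.
params = {
    "search": (
        'device.device_report_product_code:"KZH" '
        "AND date_received:[20080101 TO 20230930]"
    ),
    "count": "event_type.exact",
}

response = requests.get(BASE, params=params, timeout=30)
response.raise_for_status()

for bucket in response.json().get("results", []):
    print(f'{bucket["term"]}: {bucket["count"]}')
```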

          Results

A total of 500 medical device reporting events are assessed for autoinjectors under the KZH code. Ultimately, 188 of these events are confirmed to be associated with autoinjectors, and all 500 devices lack AI capabilities. An analysis of these events for traditional mechanical autoinjectors reveals a predominant occurrence of malfunctions (72%) and injuries (26%) among event types. Device problems, such as breakage, defects, and jams, account for 45% of incidents, while 10% are attributed to patient problems, particularly missed doses and underdoses.
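As a worked illustration of the frequency analysis, the sketch below converts event-type counts into the percentage breakdown reported above. The counts are hypothetical, back-calculated from the 188 confirmed events and the stated 72%/26% shares; they are not the study's raw tallies.

```python
# Hypothetical counts back-calculated from the reported 188 confirmed
# autoinjector events (72% malfunction, 26% injury); illustration only.
from collections import Counter

events = Counter({"Malfunction": 135, "Injury": 49, "Other": 4})
total = sum(events.values())  # 188 confirmed events

for event_type, count in events.most_common():
    print(f"{event_type}: {count} ({count / total:.0%})")
```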

          Conclusion

          Traditional autoinjectors are designed to assist patients in medication administration, underscoring the need for quality control, reliability, and design enhancements. AI autoinjectors, sharing this goal, bring additional cybersecurity and software risks, requiring a comprehensive risk management framework that includes standards, tools, training, and ongoing monitoring. The integration of AI promises to improve functionality, enable real-time monitoring, and facilitate remote clinical trials, timely interventions, and tailored medical treatments.

          Related collections

Most cited references (27)


          Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): a comparative analysis

There has been a surge of interest in artificial intelligence and machine learning (AI/ML)-based medical devices. However, it is poorly understood how and which AI/ML-based medical devices have been approved in the USA and Europe. We searched governmental and non-governmental databases to identify 222 devices approved in the USA and 240 devices in Europe. The number of approved AI/ML-based devices has increased substantially since 2015, with many being approved for use in radiology. However, few were qualified as high-risk devices. Of the 124 AI/ML-based devices commonly approved in the USA and Europe, 80 were first approved in Europe. One possible reason for approval in Europe before the USA might be the potentially relatively less rigorous evaluation of medical devices in Europe. The substantial number of approved devices highlights the need to ensure rigorous regulation of these devices. Currently, there is no specific regulatory pathway for AI/ML-based medical devices in the USA or Europe. We recommend more transparency on how devices are regulated and approved to enable and improve public trust, efficacy, safety, and quality of AI/ML-based medical devices. A comprehensive, publicly accessible database with device details for Conformité Européenne (CE)-marked medical devices in Europe and US Food and Drug Administration approved devices is needed.

            Role of Artificial Intelligence in Patient Safety Outcomes: Systematic Literature Review

Background: Artificial intelligence (AI) provides opportunities to identify the health risks of patients and thus influence patient safety outcomes.
Objective: The purpose of this systematic literature review was to identify and analyze quantitative studies utilizing or integrating AI to address and report clinical-level patient safety outcomes.
Methods: We restricted our search to the PubMed, PubMed Central, and Web of Science databases to retrieve research articles published in English between January 2009 and August 2019. We focused on quantitative studies that reported positive, negative, or intermediate changes in patient safety outcomes using AI apps, specifically those based on machine-learning algorithms and natural language processing. Quantitative studies reporting only AI performance but not its influence on patient safety outcomes were excluded from further review.
Results: We identified 53 eligible studies, which were summarized concerning their patient safety subcategories, the most frequently used AI, and reported performance metrics. Recognized safety subcategories were clinical alarms (n=9; mainly based on decision tree models), clinical reports (n=21; based on support vector machine models), and drug safety (n=23; mainly based on decision tree models). Analysis of these 53 studies also identified two essential findings: (1) the lack of a standardized benchmark and (2) heterogeneity in AI reporting.
Conclusions: This systematic review indicates that AI-enabled decision support systems, when implemented correctly, can aid in enhancing patient safety by improving error detection, patient stratification, and drug management. Future work is still needed for robust validation of these systems in prospective and real-world clinical environments to understand how well AI can predict safety outcomes in health care settings.
              Connected healthcare: Improving patient care using digital health technologies


                Author and article information

                Contributors
Marlon Luca Machal, URI: https://loop.frontiersin.org/people/1555141/overview
                Journal
Frontiers in Medical Technology (Front. Med. Technol.)
Publisher: Frontiers Media S.A.
ISSN: 2673-3129
Published: 05 April 2024
Volume: 6
Article: 1331058
                Affiliations
Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
                Author notes

                Edited by: Sanyam Gandhi, Takeda Development Centers Americas, United States

                Reviewed by: T. Ted Song, University of Washington, United States

                Sharyn O'Halloran, Columbia University, United States

* Correspondence: Marlon Luca Machal marlon.machal@tuni.fi
                Article
DOI: 10.3389/fmedt.2024.1331058
PMCID: 11026574
PMID: 38645777
                © 2024 Machal.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
Received: 31 October 2023
Accepted: 20 March 2024
                Page count
Figures: 4, Tables: 1, Equations: 0, References: 55
                Funding
                The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.
                Categories
                Medical Technology
                Original Research
                Custom metadata
                Regulatory Affairs

Keywords: artificial intelligence, primary functions, autoinjectors, risks, cybersecurity
