
      Artificial intelligence for Sustainable Development Goals: Bibliometric patterns and concept evolution trajectories


          Abstract

          The development of artificial intelligence (AI) as a field has impacted almost all aspects of human life. More recently, it has found a role in addressing developmental challenges, specifically the Sustainable Development Goals (SDGs). However, systematic studies analysing the role of AI research in achieving the SDGs remain scarce. This article therefore attempts to bridge this gap by identifying the major bibliometric trends and concept-evolution trajectories in the area of AI applications for the SDGs. Research publication data from the last 20 years in the areas of artificial intelligence, machine learning, deep learning and related techniques is obtained and computationally analysed using a framework comprising bibliometrics, path analysis and content analysis. The findings show an increasing trend in overall publications on the application of AI for the SDGs across the different regions of the world. SDG 3 (good health and well-being) and SDG 7 (affordable and clean energy) are found to be the areas with the most applications of AI. In SDG 3, the literature reflects the application of AI techniques such as deep learning for precision and personalised medicine, while in SDG 7, a number of studies have employed AI techniques for integrating systems for efficient solar power generation and for improving the energy efficiency of buildings. Furthermore, SDG 4 (quality education), SDG 13 (climate action), SDG 11 (sustainable cities and communities) and SDG 16 (peace, justice and strong institutions) are the other SDGs where AI approaches and techniques are applied. The analytical results present detailed insight into the application of AI for achieving the SDGs.
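          The analysis framework itself (bibliometrics, path analysis, content analysis) is only named in the abstract. As a rough, hypothetical illustration of the bibliometric step, the sketch below counts publications per year and per SDG from an exported record file; the file name ai_sdg_records.csv, the column names Year and SDG, and the use of pandas are assumptions made for illustration, not details taken from the article.

# Hypothetical sketch of the bibliometric step: publication counts per year
# and per SDG. The input file and column names are assumed, not from the paper.
import pandas as pd

records = pd.read_csv("ai_sdg_records.csv")  # one row per publication-SDG mapping

# Overall publication trend over the 20-year window.
yearly_trend = records.groupby("Year").size()

# Which SDGs attract the most AI research (the article reports SDG 3 and
# SDG 7 as the leaders).
sdg_counts = records["SDG"].value_counts().sort_index()

print(yearly_trend)
print(sdg_counts)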


          Most cited references (61)


          Computational Radiomics System to Decode the Radiographic Phenotype

            Radiomics aims to quantify phenotypic characteristics on medical imaging through the use of automated algorithms. Radiomic artificial intelligence (AI) technology, either based on engineered hard-coded algorithms or deep learning methods, can be used to develop non-invasive imaging-based biomarkers. However, a lack of standardized algorithm definitions and image processing severely hampers reproducibility and comparability of results. To address this issue, we developed PyRadiomics, a flexible open-source platform capable of extracting a large panel of engineered features from medical images. PyRadiomics is implemented in Python and can be used standalone or within 3D Slicer. Here, we discuss the workflow and architecture of PyRadiomics and demonstrate its application in characterizing lung lesions. Source code, documentation, and examples are publicly available at www.radiomics.io. With this platform, we aim to establish a reference standard for radiomic analyses, provide a tested and maintained resource, and grow the community of radiomic developers addressing critical needs in cancer research.
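            As a minimal sketch of the standalone Python usage described above, the snippet below extracts the default engineered features from an image and its segmentation mask. The file names are placeholders, and the exact API and settings should be checked against the PyRadiomics documentation at www.radiomics.io.

# Minimal PyRadiomics sketch; file names are placeholders.
from radiomics import featureextractor

# Default extractor settings; a parameter file or keyword arguments can
# customise preprocessing and the enabled feature classes.
extractor = featureextractor.RadiomicsFeatureExtractor()

# Extract engineered features from an image and its segmentation mask
# (e.g. a lung lesion, as in the paper's demonstration).
features = extractor.execute("lung_ct.nrrd", "lesion_mask.nrrd")

for name, value in features.items():
    print(name, value)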

            Radiomics: the bridge between medical imaging and personalized medicine

            Radiomics, the high-throughput mining of quantitative image features from standard-of-care medical imaging that enables data to be extracted and applied within clinical-decision support systems to improve diagnostic, prognostic, and predictive accuracy, is gaining importance in cancer research. Radiomic analysis exploits sophisticated image analysis tools and the rapid development and validation of medical imaging data that uses image-based signatures for precision diagnosis and treatment, providing a powerful tool in modern medicine. Herein, we describe the process of radiomics, its pitfalls, challenges, opportunities, and its capacity to improve clinical decision making, emphasizing the utility for patients with cancer. Currently, the field of radiomics lacks standardized evaluation of both the scientific integrity and the clinical relevance of the numerous published radiomics investigations resulting from the rapid growth of this area. Rigorous evaluation criteria and reporting guidelines need to be established in order for radiomics to mature as a discipline. Herein, we provide guidance for investigations to meet this urgent need in the field of radiomics.

              Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

              Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward - it is to design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.
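              As a concrete, hypothetical illustration of the distinction drawn above, the sketch below fits a shallow decision tree whose complete rule set can be read directly, rather than approximated by a post-hoc explainer. The scikit-learn library and the breast-cancer dataset are stand-in choices for illustration and are not taken from the cited article.

# Illustrative inherently interpretable model: a depth-limited decision tree
# whose decision rules are printed verbatim. Dataset and library are stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Depth is deliberately limited so the full rule set stays human-readable.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

print("Held-out accuracy:", model.score(X_test, y_test))
print(export_text(model, feature_names=list(X.columns)))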

                Author and article information

                Journal
                Sustainable Development (Wiley)
                ISSN: 0968-0802, 1099-1719
                July 30, 2023
                Affiliations
                [1] Department of Computer Science, Banaras Hindu University, Varanasi, India
                [2] FLOW, Engineering Mechanics, KTH Royal Institute of Technology, Stockholm, Sweden
                Article
                DOI: 10.1002/sd.2706
                © 2023

                License: http://creativecommons.org/licenses/by/4.0/ (CC BY 4.0)
