
      The Contribution of XAI for the Safe Development and Certification of AI: An Expert-Based Analysis

      Preprint


          Abstract

          Developing and certifying safe, or so-called trustworthy, AI has become an increasingly salient issue, especially in light of upcoming regulation such as the EU AI Act. In this context, the black-box nature of machine learning models limits the use of conventional approaches to certifying complex technical systems. As a potential solution, methods that give insight into this black box, devised in the field of eXplainable AI (XAI), could be used. In this study, the potential and shortcomings of such methods for safe AI development and certification are discussed in 15 qualitative interviews with experts from the fields of (X)AI and certification. We find that XAI methods can be a helpful asset for safe AI development, as they can reveal biases and failures of ML models, but since certification relies on comprehensive and correct information about technical systems, their impact there is expected to be limited.
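
          The abstract refers to XAI methods in general rather than to any specific technique. As a minimal, hedged illustration (not a method from the paper), the sketch below uses permutation feature importance from scikit-learn, a model-agnostic explanation technique, to show which input features a black-box classifier relies on; this is the kind of insight that can surface biases or failure modes during development.

# Illustrative sketch only; the dataset and model are placeholders, not from the study.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque ("black-box") model.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# large drops indicate features the model depends on heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, mean in ranked[:5]:
    print(f"{name}: {mean:.3f}")

          Such feature attributions can flag unexpected dependencies (e.g., on a proxy for a protected attribute), but, as the interviewed experts note, they do not by themselves provide the comprehensive and correct system information that certification requires.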


          Author and article information

          Date: 22 July 2024
          Type: Article (preprint)
          Article ID: 2408.02379
          Record ID: 767f1662-afb6-43e3-88e1-66af153c0cc6
          License: http://creativecommons.org/licenses/by/4.0/ (open access)
          Subject classes: cs.CY, cs.AI
          Keywords: Applied computer science, Artificial intelligence
