Advances in Explainable AI Applications for Smart Cities

Explainable AI for Cybersecurity

Edited book


          Abstract

In recent years, the use of AI in cybersecurity has become widespread. Black-box AI models, however, pose a significant challenge in terms of interpretability and transparency, which is one of the major drawbacks of AI-based systems. This chapter explores explainable AI (XAI) techniques as a solution to these challenges and discusses their application in cybersecurity. It begins with an overview of AI in cybersecurity, covering the types of AI commonly used, such as deep learning (DL), machine learning (ML), and natural language processing (NLP), and their applications, such as intrusion detection, malware analysis, and vulnerability assessment. It then highlights the challenges of black-box AI, including the difficulty of identifying and resolving errors, the lack of transparency, and the inability to understand the decision-making process. Finally, it delves into XAI techniques for cybersecurity solutions, including interpretable machine-learning models, rule-based systems, and model explanation techniques.
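To make the contrast with black-box models concrete, the rule-based approach the abstract mentions can be sketched in a few lines. This is a minimal, hypothetical illustration, not the chapter's own method: the rule names, thresholds, and event fields are invented for the example. The XAI property is that every alert cites the exact rules that fired, so the decision process is fully inspectable.

```python
# Hypothetical rule-based, inherently interpretable intrusion detector.
# All rules, thresholds, and field names below are illustrative only.

RULES = [
    ("high_failed_logins",
     lambda e: e["failed_logins"] > 5,
     "more than 5 failed logins in the window"),
    ("odd_hour_access",
     lambda e: e["hour"] < 5,
     "access between 00:00 and 05:00"),
    ("large_outbound",
     lambda e: e["bytes_out"] > 1_000_000,
     "over 1 MB of outbound data"),
]

def classify(event):
    """Return (verdict, reasons): every alert lists the rules that fired."""
    fired = [(name, why) for name, pred, why in RULES if pred(event)]
    verdict = "alert" if fired else "benign"
    return verdict, [why for _, why in fired]
```

For example, `classify({"failed_logins": 9, "hour": 3, "bytes_out": 100})` returns an `"alert"` verdict together with the two human-readable reasons that triggered it, whereas a black-box classifier would return only a label or score with no account of why.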


Most cited references (194)


          Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

          Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward - it is to design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.

            Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI


              Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)


                Author and book information

Book chapter, published January 18, 2024, pp. 31–97.
DOI: 10.4018/978-1-6684-6361-1.ch002
