
      DeepFocus: Detection of out-of-focus regions in whole slide digital images using deep learning

      research-article


          Abstract

          The development of whole slide scanners has revolutionized the field of digital pathology. Unfortunately, whole slide scanners often produce images with out-of-focus/blurry areas that limit the amount of tissue available for a pathologist to make an accurate diagnosis/prognosis. Moreover, these artifacts hamper the performance of computerized image analysis systems. Such areas are typically identified by visual inspection, a subjective evaluation that causes high intra- and inter-observer variability and is both tedious and time-consuming. The aim of this study is to develop deep learning-based software, called DeepFocus, that can automatically detect and segment blurry areas in digital whole slide images to address these problems. DeepFocus is built on TensorFlow, an open-source library that exploits data flow graphs for efficient numerical computation. DeepFocus was trained using 16 different H&E- and IHC-stained slides that were systematically scanned on nine different focal planes, generating 216,000 samples with varying amounts of blurriness. When trained and tested on two independent datasets, DeepFocus achieved an average accuracy of 93.2% (±9.6%), a 23.8% improvement over an existing method. DeepFocus has the potential to be integrated with whole slide scanners to automatically re-scan problematic areas, improving overall image quality for both pathologists and image analysis algorithms.
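          The study trains on patches captured at nine real focal planes; as a rough illustration of the underlying idea, that defocus can be quantified per image patch, the minimal sketch below (not the authors' method, which is a TensorFlow deep network) simulates increasing defocus with Gaussian blur and scores sharpness with the classical variance-of-Laplacian focus measure. All data and parameters here are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def sharpness_score(patch):
    """Variance of the Laplacian: a common focus measure (higher = sharper)."""
    return laplace(patch.astype(float)).var()

# Synthetic "tissue" patch: random texture stands in for a real H&E tile.
rng = np.random.default_rng(0)
patch = rng.random((64, 64))

# Simulate increasing defocus (a stand-in for scanning at different focal
# planes) by Gaussian-blurring the sharp patch with growing sigma.
sigmas = [0.0, 0.5, 1.0, 2.0, 4.0]
scores = [sharpness_score(gaussian_filter(patch, s) if s > 0 else patch)
          for s in sigmas]

for s, sc in zip(sigmas, scores):
    print(f"sigma={s:.1f}  focus score={sc:.4f}")
```

A patch-level classifier like DeepFocus effectively learns a threshold on this kind of signal from labeled examples rather than using a fixed handcrafted measure.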

          Related collections

          Most cited references (15)


          Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis

          Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce ‘deep learning’ as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30–40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that ‘deep learning’ holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging.

            A Deep Convolutional Neural Network for segmenting and classifying epithelial and stromal regions in histopathological images.

            Epithelial (EP) and stromal (ST) tissues are two tissue types in histological images. Automated segmentation or classification of EP and ST tissues is important when developing computerized systems for analyzing the tumor microenvironment. In this paper, a Deep Convolutional Neural Network (DCNN) based feature learning approach is presented to automatically segment or classify EP and ST regions from digitized tumor tissue microarrays (TMAs). Current approaches are based on handcrafted feature representations, such as color, texture, and Local Binary Patterns (LBP), for classifying the two regions. Compared to handcrafted-feature approaches, which involve task-dependent representations, a DCNN is an end-to-end feature extractor that can be learned directly from the raw pixel intensity values of EP and ST tissues in a data-driven fashion. These high-level features contribute to the construction of a supervised classifier for discriminating the two types of tissues. In this work we compare DCNN based models with three handcrafted-feature extraction approaches on two different datasets, which consist of 157 Hematoxylin and Eosin (H&E) stained images of breast cancer and 1376 immunohistochemically (IHC) stained images of colorectal cancer, respectively. The DCNN based feature learning approach achieved an F1 classification score of 85%, 89%, and 100%, accuracy (ACC) of 84%, 88%, and 100%, and Matthews Correlation Coefficient (MCC) of 86%, 77%, and 100% on the two H&E stained datasets (NKI and VGH) and the IHC stained dataset, respectively. Our DCNN based approach outperformed the three handcrafted-feature extraction approaches in the classification of EP and ST regions.
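            As a toy illustration of the handcrafted-feature pipeline that paper compares against (not its DCNN, and not its datasets), the sketch below builds a color-histogram feature and a nearest-centroid classifier on synthetic stand-in patches; all data, names, and color statistics are hypothetical.

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Handcrafted feature: concatenated per-channel intensity histograms."""
    feats = [np.histogram(patch[..., c], bins=bins, range=(0, 1), density=True)[0]
             for c in range(patch.shape[-1])]
    return np.concatenate(feats)

rng = np.random.default_rng(1)
# Hypothetical stand-ins: "epithelial" patches skew pink, "stromal" skew pale.
ep = [np.clip(rng.normal([0.8, 0.4, 0.6], 0.1, (32, 32, 3)), 0, 1) for _ in range(20)]
st = [np.clip(rng.normal([0.9, 0.8, 0.9], 0.1, (32, 32, 3)), 0, 1) for _ in range(20)]

# Nearest-centroid classifier over the handcrafted features,
# fit on the first 10 patches of each class.
ep_centroid = np.mean([color_histogram(p) for p in ep[:10]], axis=0)
st_centroid = np.mean([color_histogram(p) for p in st[:10]], axis=0)

def classify(patch):
    f = color_histogram(patch)
    return "EP" if np.linalg.norm(f - ep_centroid) < np.linalg.norm(f - st_centroid) else "ST"

preds_ep = [classify(p) for p in ep[10:]]
preds_st = [classify(p) for p in st[10:]]
print(sum(p == "EP" for p in preds_ep), "/ 10 EP correct")
print(sum(p == "ST" for p in preds_st), "/ 10 ST correct")
```

A DCNN replaces the fixed `color_histogram` step with convolutional features learned end-to-end from raw pixels, which is the comparison the cited paper makes.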

              Methods for nuclei detection, segmentation, and classification in digital histopathology: a review-current status and future potential.

              Digital pathology represents one of the major evolutions in modern medicine. Pathological examinations constitute the gold standard in many medical protocols, and also play a critical and legal role in the diagnosis process. In the conventional cancer diagnosis, pathologists analyze biopsies to make diagnostic and prognostic assessments, mainly based on the cell morphology and architecture distribution. Recently, computerized methods have been rapidly evolving in the area of digital pathology, with growing applications related to nuclei detection, segmentation, and classification. In cancer research, these approaches have played, and will continue to play a key (often bottleneck) role in minimizing human intervention, consolidating pertinent second opinions, and providing traceable clinical information. Pathological studies have been conducted for numerous cancer detection and grading applications, including brain, breast, cervix, lung, and prostate cancer grading. Our study presents, discusses, and extracts the major trends from an exhaustive overview of various nuclei detection, segmentation, feature computation, and classification techniques used in histopathology imagery, specifically in hematoxylin-eosin and immunohistochemical staining protocols. This study also enables us to measure the challenges that remain, in order to reach robust analysis of whole slide images, essential high content imaging with diagnostic biomarkers and prognosis support in digital pathology.

                Author and article information

                Contributors
                Roles: Data curation, Formal analysis, Methodology, Software, Validation, Writing – original draft, Writing – review & editing
                Roles: Methodology, Writing – original draft, Writing – review & editing
                Roles: Writing – review & editing
                Roles: Conceptualization, Formal analysis, Funding acquisition, Methodology, Project administration, Resources, Validation, Writing – original draft, Writing – review & editing
                Role: Editor
                Journal
                PLoS ONE
                Public Library of Science (San Francisco, CA, USA)
                ISSN: 1932-6203
                Published: 25 October 2018
                2018; 13(10): e0205387
                Affiliations
                [1 ] Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, NC, United States of America
                [2 ] Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, OH, United States of America
                Taipei Medical University, Taiwan
                Author notes

                Competing Interests: The authors have declared that no competing interests exist.

                ‡ These authors are joint senior authors on this work.

                Author information
                http://orcid.org/0000-0003-4211-4367
                Article
                Manuscript ID: PONE-D-17-34191
                DOI: 10.1371/journal.pone.0205387
                PMCID: PMC6201886
                PMID: 30359393
                © 2018 Senaras et al.

                This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

                History
                Received: 20 September 2017
                Accepted: 25 September 2018
                Page count
                Figures: 6, Tables: 5, Pages: 13
                Funding
                Funded by: National Cancer Institute (funder ID: http://dx.doi.org/10.13039/100000054); Award ID: R01CA134451
                Funded by: National Institutes of Health (funder ID: http://dx.doi.org/10.13039/100000002); Award ID: U24CA199374
                Funded by: National Institutes of Health (funder ID: http://dx.doi.org/10.13039/100000002); Award ID: U01CA198945
                The project described was supported in part by Awards Number R01CA134451 (PIs: Gurcan, Lozanski), U24CA199374 (PI: Gurcan), and U01CA198945 from the National Cancer Institute. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Cancer Institute, or the National Institutes of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
                Categories
                Research Article
                Engineering and Technology
                Digital Imaging
                Computer and Information Sciences
                Artificial Intelligence
                Machine Learning
                Deep Learning
                Research and Analysis Methods
                Imaging Techniques
                Image Analysis
                Physical Sciences
                Mathematics
                Applied Mathematics
                Algorithms
                Research and Analysis Methods
                Simulation and Modeling
                Algorithms
                Physical Sciences
                Physics
                Optics
                Focal Planes
                Computer and Information Sciences
                Data Acquisition
                Research and Analysis Methods
                Imaging Techniques
                Computer and Information Sciences
                Computers
                Personal Computers
                Custom metadata
                All files are available from the DeepFocus image dataset (DOI: https://doi.org/10.5281/zenodo.1134848), and the source code is available from https://doi.org/10.5281/zenodo.1148821.

