
      Robust autofocus method based on patterned active illumination and image cross-correlation analysis

Research article


          Abstract

The quality of the whole-slide image (WSI) is the foundation of any computer-aided diagnosis system, and an effective autofocus method is an essential part of ensuring that quality. Existing autofocus methods must trade focusing speed against focusing accuracy, and must be optimized separately for different samples or scenes. In this paper, a robust autofocus method based on fiber-bundle illumination and image cross-correlation analysis is proposed. Through active illumination, it meets the autofocusing requirements of various application scenes, such as bright-field and fluorescence imaging; through image analysis, it maintains autofocusing accuracy across different structures on a sample. The experimental results show that the proposed autofocus method can effectively track changes in the distance from the sample to the focal plane and significantly improve WSI quality.
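The article body is not reproduced in this record, but the abstract's core signal, a pattern displacement recovered by cross-correlating actively illuminated images, can be illustrated. Below is a minimal sketch, assuming a dual-source (e.g., fiber-bundle) illumination scheme in which the projected pattern shifts laterally in proportion to defocus; the function names and the `px_per_um` calibration constant are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def normalized_xcorr_shift(img_a, img_b):
    """Estimate the lateral shift between two images from the peak of
    their normalized cross-correlation (computed via FFTs).

    In dual-source active-illumination autofocus schemes, the projected
    pattern from two off-axis sources separates laterally in proportion
    to the defocus distance, so this shift is the raw focus signal.
    """
    a = (img_a - img_a.mean()) / (img_a.std() + 1e-12)
    b = (img_b - img_b.mean()) / (img_b.std() + 1e-12)
    # Circular cross-correlation via the FFT convolution theorem.
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map FFT indices to signed shifts (wrap-around convention).
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return np.array(shifts, dtype=float)

def defocus_from_shift(shift_px, px_per_um=0.5):
    """Convert the measured pattern shift to a defocus distance.

    `px_per_um` is a hypothetical calibration constant (pixels of
    pattern shift per micrometre of defocus); on a real instrument it
    would be measured once from a z-stack of a calibration target.
    """
    return np.hypot(*shift_px) / px_per_um
```

In use, the two illumination images would be captured at each field of view and the estimated defocus fed back to the z-stage; sub-pixel interpolation around the correlation peak would further refine the estimate.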


Most cited references (21)


          Artificial intelligence in digital pathology — new tools for diagnosis and precision oncology

In the past decade, advances in precision oncology have resulted in an increased demand for predictive assays that enable the selection and stratification of patients for treatment. The enormous divergence of signalling and transcriptional networks mediating the crosstalk between cancer, stromal and immune cells complicates the development of functionally relevant biomarkers based on a single gene or protein. However, the result of these complex processes can be uniquely captured in the morphometric features of stained tissue specimens. The possibility of digitizing whole-slide images of tissue has led to the advent of artificial intelligence (AI) and machine learning tools in digital pathology, which enable mining of subvisual morphometric phenotypes and might, ultimately, improve patient management. In this Perspective, we critically evaluate various AI-based computational approaches for digital pathology, focusing on deep neural networks and ‘hand-crafted’ feature-based methodologies. We aim to provide a broad framework for incorporating AI and machine learning tools into clinical oncology, with an emphasis on biomarker development. We discuss some of the challenges relating to the use of AI, including the need for well-curated validation datasets, regulatory approval and fair reimbursement strategies. Finally, we present potential future opportunities for precision oncology.

            Active microscope stabilization in three dimensions using image correlation


              Whole-Slide Image Focus Quality: Automatic Assessment and Impact on AI Cancer Detection

Background: Digital pathology enables remote access or consults and powerful image analysis algorithms. However, the slide digitization process can create artifacts such as out-of-focus (OOF) regions. OOF is often only detected on careful review, potentially causing rescanning and workflow delays. Although scan-time operator screening for whole-slide OOF is feasible, manual screening for OOF affecting only parts of a slide is impractical. Methods: We developed a convolutional neural network (ConvFocus) to exhaustively localize and quantify the severity of OOF regions on digitized slides. ConvFocus was developed using our refined semi-synthetic OOF data generation process and evaluated using seven slides spanning three different tissue types and three different stain types, each of which was digitized using two different whole-slide scanner models. ConvFocus's predictions were compared with pathologist-annotated focus quality grades across 514 distinct regions representing 37,700 image patches of 35 μm × 35 μm, and 21 digitized “z-stack” WSIs that contain known OOF patterns. Results: When compared to pathologist-graded focus quality, ConvFocus achieved Spearman rank coefficients of 0.81 and 0.94 on the two scanners and reproduced the expected OOF patterns from z-stack scanning. We also evaluated the impact of OOF on the accuracy of a state-of-the-art metastatic breast cancer detector and saw a consistent decrease in performance with increasing OOF. Conclusions: Comprehensive whole-slide OOF categorization could enable rescans before pathologist review, potentially reducing the impact of digitization focus issues on the clinical workflow. We show that the algorithm trained on our semi-synthetic OOF data generalizes well to real OOF regions across tissue types, stains, and scanners. Finally, quantitative OOF maps can flag regions that might otherwise be misclassified by image analysis algorithms, preventing OOF-induced errors.
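As a concrete illustration of the patch-wise evaluation loop described above, the sketch below tiles a slide region, scores each patch with a stand-in OOF grader, and measures agreement with pathologist grades via a Spearman rank coefficient. The `score_patch` callable and the patch size in pixels are placeholders; the actual ConvFocus network is not reproduced here.

```python
import numpy as np
from scipy.stats import spearmanr

def tile(image, patch_px):
    """Split a 2-D image into non-overlapping square patches.

    In the study above each patch corresponds to a 35 um x 35 um region;
    `patch_px` is whatever that works out to at the scan resolution.
    """
    h, w = image.shape[:2]
    for y in range(0, h - patch_px + 1, patch_px):
        for x in range(0, w - patch_px + 1, patch_px):
            yield (y, x), image[y:y + patch_px, x:x + patch_px]

def oof_heatmap(image, patch_px, score_patch):
    """Build a per-patch out-of-focus severity map.

    `score_patch` is a hypothetical stand-in for the trained model:
    any callable mapping a patch to a scalar OOF grade
    (higher = more out of focus).
    """
    h, w = image.shape[:2]
    grid = np.zeros((h // patch_px, w // patch_px))
    for (y, x), patch in tile(image, patch_px):
        grid[y // patch_px, x // patch_px] = score_patch(patch)
    return grid

def agreement_with_pathologist(predicted, annotated):
    """Spearman rank coefficient between predicted and annotated grades,
    the figure of merit reported above (0.81 and 0.94 per scanner)."""
    rho, _ = spearmanr(np.ravel(predicted), np.ravel(annotated))
    return rho
```

Rank correlation is the natural metric here because pathologist focus grades are ordinal: what matters is that the model orders regions from sharp to blurry consistently, not that it reproduces grade values exactly.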

                Author and article information

Journal
Biomedical Optics Express (Biomed Opt Express; BOE)
Publisher: Optica Publishing Group
ISSN: 2156-7085
Published online: 29 March 2024; issue date: 01 April 2024
Volume 15, Issue 4, Pages 2697-2707
Affiliations
[1] Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya 570228, China
[2] yjzhang@hainanu.edu.cn
[3] Huang2020@hainanu.edu.cn
                Author information
                https://orcid.org/0000-0001-5703-3610
                https://orcid.org/0000-0003-2400-966X
Article
Publisher article ID: 520514
DOI: 10.1364/BOE.520514
PMCID: 11019692
PMID: 38633067
© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
License: https://doi.org/10.1364/OA_License_v2#VOR-OA

History
Received: 31 January 2024
Revised: 09 March 2024
Accepted: 22 March 2024
                Funding
Funded by: National Key Research and Development Program of China (10.13039/501100012166)
                Award ID: 2022YFC3400601
Funded by: National Natural Science Foundation of China (10.13039/501100001809)
                Award ID: 82260368
                Funded by: Hainan Province Science and Technology Special Fund
                Award ID: ZDYF2022SHFZ079
                Award ID: ZDYF2022SHFZ126
                Funded by: Start-up Fund from Hainan University
                Award ID: KYQD(ZR)-20077
                Categories
                Article

Vision sciences
