      GJFocuser: a Gaussian difference and joint learning-based autofocus method for whole slide imaging

      research-article


          Abstract

          Whole slide imaging (WSI) provides tissue visualization at the cellular level, thereby enhancing the effectiveness of computer-aided diagnostic systems. High-precision autofocusing methods are essential for ensuring the quality of WSI. However, the accuracy of existing autofocusing techniques can be notably affected by variations in staining and sample heterogeneity, particularly without the addition of extra hardware. This study proposes a robust autofocusing method based on the difference of Gaussians (DoG) and joint learning. The DoG emphasizes image edge information that is closely related to focal distance, thereby mitigating the influence of staining variations. The joint learning framework constrains the network’s sensitivity to defocus distance, effectively addressing the impact of differences in sample morphology. We first conduct comparative experiments on public datasets against state-of-the-art methods, with results indicating that our approach achieves cutting-edge performance. Subsequently, we apply this method in a low-cost digital microscopy system, showcasing its effectiveness and versatility in practical scenarios.
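          The DoG step the abstract describes can be illustrated with a minimal sketch: subtracting two Gaussian blurs acts as a band-pass filter that keeps edge detail, which weakens as a patch drifts out of focus. The sigma values and function names below are illustrative assumptions, not the paper's actual parameters.

```python
# Minimal difference-of-Gaussians (DoG) sketch for edge emphasis.
# Sigma values are illustrative, not taken from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(image, sigma_low=1.0, sigma_high=2.0):
    """Subtract two Gaussian blurs to retain band-pass (edge) detail."""
    img = image.astype(np.float64)
    return gaussian_filter(img, sigma=sigma_low) - gaussian_filter(img, sigma=sigma_high)

# A defocused patch has weaker high-frequency content, so the mean
# absolute DoG response shrinks as defocus distance grows.
patch = np.random.rand(64, 64)
response = np.abs(dog(patch)).mean()
```

In the paper's pipeline such a DoG map would be the network input rather than the raw image, which is what decouples the focus cue from staining intensity.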

          Related collections

          Most cited references (25)


          Fluorophore localization algorithms for super-resolution microscopy.

          Super-resolution localization microscopy methods provide powerful new capabilities for probing biology at the nanometer scale via fluorescence. These methods rely on two key innovations: switchable fluorophores (which blink on and off and can be sequentially imaged) and powerful localization algorithms (which estimate the positions of the fluorophores in the images). These techniques have spurred a flurry of innovation in algorithm development over the last several years. In this Review, we survey the fundamental issues for single-fluorophore fitting routines, localization algorithms based on principles other than fitting, three-dimensional imaging, dipole imaging and techniques for estimating fluorophore positions from images of multiple activated fluorophores. We offer practical advice for users and adopters of algorithms, and we identify areas for further development.
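            The position-estimation task this review surveys can be sketched with the simplest estimator it discusses alternatives to: an intensity-weighted centroid of a single-emitter image. The synthetic data and function name below are illustrative, not from the review.

```python
# Minimal single-emitter localization by intensity-weighted centroid,
# a simple baseline relative to the Gaussian-fitting routines surveyed.
import numpy as np

def centroid_localize(image):
    """Estimate the (row, col) position of a single bright emitter."""
    img = image - image.min()          # crude constant-background removal
    rows, cols = np.indices(img.shape)
    total = img.sum()
    return (rows * img).sum() / total, (cols * img).sum() / total

# Synthetic emitter centered at (10.0, 14.0) on a 32x32 grid
yy, xx = np.indices((32, 32))
spot = np.exp(-((yy - 10.0) ** 2 + (xx - 14.0) ** 2) / (2 * 2.0 ** 2))
y_est, x_est = centroid_localize(spot)
```

Fitting-based localizers replace the centroid with a least-squares or maximum-likelihood fit of a point-spread-function model, trading speed for precision near the photon-noise limit.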

            Searching for MobileNetV3

            We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small, which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder, Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state-of-the-art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2% more accurate on ImageNet classification while reducing latency by 15% compared to MobileNetV2. MobileNetV3-Small is 4.6% more accurate while reducing latency by 5% compared to MobileNetV2. MobileNetV3-Large detection is 25% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation. (ICCV 2019)

              Adaptive light-sheet microscopy for long-term, high-resolution imaging in living organisms


                Author and article information

                Journal
                Biomed Opt Express (BOE); full title: Biomedical Optics Express
                Optica Publishing Group
                ISSN: 2156-7085
                23 December 2024; 01 January 2025
                Volume 16, Issue 1, pp. 282-302
                Affiliations
                [1] School of Computer Science and Technology, Hainan University, Haikou 570228, China
                [2] Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya 570228, China
                Author information
                https://orcid.org/0000-0003-2400-966X
                https://orcid.org/0000-0002-9366-6586
                Article
                Article ID: 547119
                DOI: 10.1364/BOE.547119
                PMCID: 11729290
                PMID: 39816138
                © 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

                https://doi.org/10.1364/OA_License_v2#VOR-OA

                History
                05 November 2024; 13 December 2024; 17 December 2024
                Funding
                National Natural Science Foundation of China (10.13039/501100001809), Award ID: 82160345
                Major Science and Technology Project of Hainan Province (10.13039/501100013072), Award ID: ZDKJ2021016
                National Key Research and Development Program of China (10.13039/501100012166), Award ID: 2021YFC2600600
                Categories
                Article
                Vision sciences

