      Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks

Research article
Nature Medicine


          Abstract

          Intraoperative diagnosis is essential for providing safe and effective care during cancer surgery [1]. The existing workflow for intraoperative diagnosis, based on hematoxylin and eosin staining of processed tissue, is time-, resource-, and labor-intensive [2,3]. Moreover, interpretation of intraoperative histologic images depends on a contracting, unevenly distributed pathology workforce [4]. Here, we report a parallel workflow that combines stimulated Raman histology (SRH) [5-7], a label-free optical imaging method, with deep convolutional neural networks (CNNs) to predict diagnosis at the bedside in near real-time in an automated fashion. Specifically, our CNN, trained on over 2.5 million SRH images, predicts brain tumor diagnosis in the operating room in under 150 seconds, an order of magnitude faster than conventional techniques (e.g., 20-30 minutes) [2]. In a multicenter, prospective clinical trial (n = 278), we demonstrated that CNN-based diagnosis of SRH images was non-inferior to pathologist-based interpretation of conventional histologic images (overall accuracy, 94.6% vs. 93.9%). Our CNN learned a hierarchy of recognizable histologic feature representations to classify the major histopathologic classes of brain tumors. Additionally, we implemented a semantic segmentation method to identify tumor-infiltrated, diagnostic regions within SRH images. These results demonstrate how intraoperative cancer diagnosis can be streamlined, creating a complementary pathway for tissue diagnosis that is independent of a traditional pathology laboratory.
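The workflow described above classifies small SRH image patches with a CNN and then aggregates patch-level predictions into a specimen-level diagnosis, filtering out nondiagnostic regions. A minimal sketch of that aggregation step (the class names, threshold, and soft-voting rule here are illustrative assumptions, not the trial's exact inference procedure):

```python
import numpy as np

# Hypothetical class labels; the actual system distinguishes many more
# histopathologic categories.
CLASSES = ["glioma", "meningioma", "nondiagnostic"]

def aggregate_patch_predictions(patch_probs, nondiagnostic_idx=2, threshold=0.9):
    """Aggregate per-patch softmax outputs into a specimen-level diagnosis.

    Patches confidently called nondiagnostic are excluded; the remaining
    probabilities are summed (soft voting) and the top tumor class wins.
    """
    patch_probs = np.asarray(patch_probs, dtype=float)
    # Keep only patches that are not confidently nondiagnostic
    keep = patch_probs[:, nondiagnostic_idx] < threshold
    informative = patch_probs[keep]
    if informative.size == 0:
        return "nondiagnostic"
    summed = informative.sum(axis=0)
    summed[nondiagnostic_idx] = 0.0  # final call must be a tumor class
    return CLASSES[int(np.argmax(summed))]
```

Soft voting over filtered patches is one common design choice for whole-specimen inference; majority voting over patch argmaxes is an alternative.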


          Most cited references (27)


          Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning

          Visual inspection of histopathology slides is one of the main methods used by pathologists to assess the stage, type and subtype of lung tumors. Adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC) are the most prevalent subtypes of lung cancer, and their distinction requires visual inspection by an experienced pathologist. In this study, we trained a deep convolutional neural network (inception v3) on whole-slide images obtained from The Cancer Genome Atlas to accurately and automatically classify them into LUAD, LUSC or normal lung tissue. The performance of our method is comparable to that of pathologists, with an average area under the curve (AUC) of 0.97. Our model was validated on independent datasets of frozen tissues, formalin-fixed paraffin-embedded tissues and biopsies. Furthermore, we trained the network to predict the ten most commonly mutated genes in LUAD. We found that six of them-STK11, EGFR, FAT1, SETBP1, KRAS and TP53-can be predicted from pathology images, with AUCs from 0.733 to 0.856 as measured on a held-out population. These findings suggest that deep-learning models can assist pathologists in the detection of cancer subtype or gene mutations. Our approach can be applied to any cancer type, and the code is available at https://github.com/ncoudray/DeepPATH .
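The AUC values reported above can be computed without an explicit ROC sweep, using the rank-sum (Mann-Whitney U) identity: the AUC equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A small sketch (not the authors' evaluation code):

```python
import numpy as np

def auc_score(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity.

    Counts positive-negative pairs where the positive outranks the
    negative; ties count half.
    """
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    diff = pos[:, None] - neg[None, :]  # all pairwise score differences
    wins = np.sum(diff > 0) + 0.5 * np.sum(diff == 0)
    return float(wins / (pos.size * neg.size))
```

The pairwise formulation is O(n_pos * n_neg); for large held-out sets a rank-based O(n log n) variant is preferable.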

            Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning

            Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question: Are there any benefits to combining Inception architectures with residual connections? Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4 networks, we achieve 3.08% top-5 error on the test set of the ImageNet classification (CLS) challenge.
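The residual connection with activation scaling that the abstract credits for stabilizing very wide networks can be sketched in a few framework-free lines (`branch` here stands in for an arbitrary Inception sub-network; the 0.1 scale is one of the small values the paper's range suggests):

```python
import numpy as np

def scaled_residual(x, branch, scale=0.1):
    """Residual connection with activation scaling.

    The branch output is down-scaled before being added back to the
    identity shortcut, which keeps the residual update small relative
    to the shortcut signal early in training.
    """
    return x + scale * branch(x)
```

Without scaling, wide residual branches can produce activations large enough to destabilize training; scaling keeps the shortcut path dominant.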

              Label-free biomedical imaging with high sensitivity by stimulated Raman scattering microscopy.

              Label-free chemical contrast is highly desirable in biomedical imaging. Spontaneous Raman microscopy provides specific vibrational signatures of chemical bonds, but is often hindered by low sensitivity. Here we report a three-dimensional multiphoton vibrational imaging technique based on stimulated Raman scattering (SRS). The sensitivity of SRS imaging is significantly greater than that of spontaneous Raman microscopy, which is achieved by implementing high-frequency (megahertz) phase-sensitive detection. SRS microscopy has a major advantage over previous coherent Raman techniques in that it offers background-free and readily interpretable chemical contrast. We show a variety of biomedical applications, such as differentiating distributions of omega-3 fatty acids and saturated lipids in living cells, imaging of brain and skin tissues based on intrinsic lipid contrast, and monitoring drug delivery through the epidermis.
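The high-frequency phase-sensitive detection mentioned above is a lock-in scheme: the weak stimulated Raman signal is modulated at a known megahertz frequency and demodulated against quadrature references, rejecting noise away from that frequency. A toy numerical sketch of the demodulation step (sampling rate and modulation frequency are illustrative, not the instrument's):

```python
import numpy as np

def lock_in_amplitude(signal, fs, f_ref):
    """Estimate the amplitude of a weak modulation at f_ref by
    phase-sensitive (lock-in) detection.

    Demodulates against quadrature references and low-pass filters by
    averaging over the record; combining I and Q makes the estimate
    insensitive to the unknown signal phase.
    """
    t = np.arange(signal.size) / fs
    i = np.mean(signal * np.cos(2 * np.pi * f_ref * t))
    q = np.mean(signal * np.sin(2 * np.pi * f_ref * t))
    return 2.0 * np.hypot(i, q)
```

Averaging over an integer number of modulation cycles recovers the amplitude exactly; in practice a proper low-pass filter replaces the simple mean.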

                Author and article information

                Journal
                Nat Med (Nature Medicine)
                ISSN: 1078-8956 (print); 1546-170X (electronic)
                25 November 2019; 06 January 2020; January 2020; 06 July 2020
                Volume 26, Issue 1: 52-58
                Affiliations
                [1 ]Department of Neurosurgery, University of Michigan, Ann Arbor, Michigan, USA
                [2 ]School of Medicine, University of Michigan, Ann Arbor, Michigan, USA
                [3 ]College of Physicians and Surgeons, Columbia University, New York, New York, USA
                [4 ]Department of Neurological Surgery, University of Miami, Miami, Florida, USA
                [5 ]Department of Neurological Surgery, Columbia University, New York, New York, USA
                [6 ]Invenio Imaging, Inc., Santa Clara, California, USA
                [7 ]Department of Pediatrics, Oncology, Columbia University, New York, New York, USA
                [8 ]Department of Otolaryngology, University of Michigan, Ann Arbor, Michigan, USA
                [9 ]Department of Pathology, New York University, New York, New York, USA
                [10 ]Department of Biostatistics, School of Public Health, University of Michigan, Ann Arbor, Michigan, USA
                [11 ]Department of Pathology & Cell Biology, Columbia University, New York, New York, USA
                [12 ]Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, Michigan, USA
                [13 ]Department of Pathology, University of Michigan, Ann Arbor, Michigan, USA
                [14 ]Department of Neurosurgery, New York University, New York, New York, USA
                Author notes
                [‡]Present address: Department of Neurological Surgery, University of California San Francisco, San Francisco, California, USA

                Author contributions: T.C.H., S.C.-P., and D.A.O. conceived the study, designed the experiments, and wrote the paper, and were assisted by B.P., H.L., A.R.A., E.U., Z.U.F., S.L., P.D.P., T.M., M.S., P.C., and S.S.S.K. Authors C.W.F. and J.T. built the SRH microscope. T.C.H., A.R.A., E.U., A.V.S., T.D.J., P.C., and A.H.S. analyzed the data. T.D.J. and T.C.H. performed statistical analyses. D.A.O., S.L.H.-J., H.J.L.G., J.A.H., C.O.M., E.L.M., S.E.S., P.G.P., M.B.S., J.N.B., M.L.O., B.G.T., K.M.M., R.S.D., O.S., D.G.E., R.J.K., M.E.I., and G.M.M. provided surgical specimens for imaging. All authors reviewed and edited the manuscript.

                [* ]Corresponding author: Daniel A. Orringer, MD, New York University, 530 First Ave., SKI 8S, New York, NY 10016; phone 212-263-0904; Daniel.Orringer@nyulangone.org
                Article
                NIHMS ID: NIHMS1544390
                DOI: 10.1038/s41591-019-0715-9
                PMC: PMC6960329
                PMID: 31907460

                Users may view, print, copy, and download text and data-mine the content in such documents, for the purposes of academic research, subject always to the full Conditions of use: http://www.nature.com/authors/editorial_policies/license.html#terms

                Categories
                Article
                Medicine
