
      Sharpness-Aware Low-Dose CT Denoising Using Conditional Generative Adversarial Network

      Journal of Digital Imaging
      Springer Nature


          Abstract

<p class="first" id="Par1">Low-dose computed tomography (LDCT) offers tremendous benefits in radiation-restricted applications, but the quantum noise resulting from an insufficient number of photons can harm diagnostic performance. Current image-based denoising methods tend to blur the final reconstructed results, especially at high noise levels. In this paper, a deep learning-based approach was proposed to mitigate this problem. An adversarially trained network and a sharpness detection network were used to guide the training process. Experiments on both simulated and real datasets show that the proposed method incurs very little resolution loss and achieves better performance than state-of-the-art methods, both quantitatively and visually.</p>
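The abstract describes a training process guided by an adversarial network and a sharpness detection network on top of the usual pixel fidelity term. A minimal NumPy sketch of such a combined objective is below; the weights, the non-saturating adversarial term, and the MSE sharpness-consistency term are illustrative assumptions, not the paper's actual loss.

```python
import numpy as np

def combined_denoising_loss(pred, target, d_score_fake,
                            sharp_pred, sharp_target,
                            w_pix=1.0, w_adv=0.01, w_sharp=0.1):
    """Weighted sum of three terms: pixel fidelity, a non-saturating
    adversarial term on the discriminator's score for the denoised image,
    and consistency between sharpness maps of output and target."""
    l_pix = np.mean((pred - target) ** 2)                # pixel fidelity
    l_adv = -np.log(d_score_fake + 1e-12)                # fool the discriminator
    l_sharp = np.mean((sharp_pred - sharp_target) ** 2)  # match sharpness maps
    return w_pix * l_pix + w_adv * l_adv + w_sharp * l_sharp

# toy check: identical images and a fully fooled discriminator give ~0 loss
img = np.random.rand(64, 64)
loss = combined_denoising_loss(img, img, d_score_fake=1.0,
                               sharp_pred=np.zeros(1),
                               sharp_target=np.zeros(1))
```

In a real training loop each term would come from live networks (generator output, discriminator score, sharpness-network features); here they are passed in directly to keep the sketch self-contained.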

          Related collections

Most cited references: 36


K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation


            Show and tell: A neural image caption generator


              Perceptual Losses for Real-Time Style Transfer and Super-Resolution

We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.
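The cited abstract contrasts per-pixel losses with perceptual losses computed on features from a fixed network. A minimal NumPy sketch of that idea is below; a single hand-rolled convolution stands in for a pretrained feature layer (the real method uses deep VGG features), and the zero-mean kernel is an illustrative assumption chosen so the feature space ignores constant brightness shifts.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D correlation, standing in for one fixed feature layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def perceptual_loss(output, target, kernel):
    """MSE between feature maps rather than raw pixels."""
    f_out = conv2d_valid(output, kernel)
    f_tgt = conv2d_valid(target, kernel)
    return np.mean((f_out - f_tgt) ** 2)

rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3))
zero_mean_kernel = kernel - kernel.mean()  # responds to structure, not offsets

target = rng.random((16, 16))
shifted = target + 0.05                    # constant brightness shift

pix_loss = np.mean((shifted - target) ** 2)               # penalizes the shift
perc_loss = perceptual_loss(shifted, target, zero_mean_kernel)  # ~0
```

The example illustrates the motivation: a per-pixel loss penalizes a perceptually harmless brightness offset, while a feature-space loss built from zero-mean filters does not.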

                Author and article information

Journal
Journal of Digital Imaging (J Digit Imaging)
Springer Nature
ISSN (print): 0897-1889
ISSN (electronic): 1618-727X
Issue date: October 2018
Published online: February 20, 2018
Volume: 31
Issue: 5
Pages: 655-669
Article
DOI: 10.1007/s10278-018-0056-0
PMC: 6148809
PMID: 29464432
Record ID: 9152e424-4ba8-4118-b576-6004b7a15e5a
© 2018

                http://www.springer.com/tdm
