
      From picture to 3D hologram: end-to-end learning of real-time 3D photorealistic hologram generation from 2D image input

      Optics Letters
      Optica Publishing Group


          Abstract

          In this Letter, we demonstrate a deep-learning-based method capable of synthesizing a photorealistic 3D hologram in real-time directly from the input of a single 2D image. We design a fully automatic pipeline to create large-scale datasets by converting any collection of real-life images into pairs of 2D images and corresponding 3D holograms and train our convolutional neural network (CNN) end-to-end in a supervised way. Our method is extremely computation-efficient and memory-efficient for 3D hologram generation merely from the knowledge of on-hand 2D image content. We experimentally demonstrate speckle-free and photorealistic holographic 3D displays from a variety of scene images, opening up a way of creating real-time 3D holography from everyday pictures. © 2023 Optical Society of America
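The abstract describes a learned end-to-end mapping from a single 2D image to a phase hologram. The authors' trained CNN is not reproduced here; as a loose, hypothetical illustration of the data flow only (a single fixed convolution standing in for the network, and a phase-only hologram representation assumed), a toy sketch:

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'same' 2D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def image_to_phase_hologram(img, kernel):
    """Map a 2D intensity image to a phase-only hologram plane.

    Hypothetical stand-in for the paper's trained CNN: one fixed
    convolution followed by a squash into the (-pi, pi) phase range.
    """
    feat = conv2d(img, kernel)
    return np.pi * np.tanh(feat)  # phase values in (-pi, pi)

rng = np.random.default_rng(0)
img = rng.random((32, 32))                 # toy 2D input "picture"
kernel = rng.standard_normal((3, 3)) * 0.1 # pretend learned weights
holo = image_to_phase_hologram(img, kernel)
print(holo.shape)  # (32, 32)
```

The real pipeline replaces the single convolution with a deep CNN trained on image/hologram pairs; this sketch shows only the shape of the input-to-hologram mapping.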

          Most cited references (16)


          Make3D: learning 3D scene structure from a single still image.

          We consider the problem of estimating detailed 3D structure from a single still image of an unstructured environment. Our goal is to create 3D models that are both quantitatively accurate as well as visually pleasing. For each small homogeneous patch in the image, we use a Markov Random Field (MRF) to infer a set of "plane parameters" that capture both the 3D location and 3D orientation of the patch. The MRF, trained via supervised learning, models both image depth cues as well as the relationships between different parts of the image. Other than assuming that the environment is made up of a number of small planes, our model makes no explicit assumptions about the structure of the scene; this enables the algorithm to capture much more detailed 3D structure than does prior art and also give a much richer experience in the 3D flythroughs created using image-based rendering, even for scenes with significant nonvertical structure. Using this approach, we have created qualitatively correct 3D models for 64.9 percent of 588 images downloaded from the Internet. We have also extended our model to produce large-scale 3D models from a few images.
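The Make3D abstract models each patch's depth by "plane parameters" capturing 3D location and orientation. As a loose illustration of that idea only (an independent least-squares fit per patch, assuming depth over a patch follows d(x, y) = a*x + b*y + c; the paper instead infers the parameters jointly with a supervised MRF), a minimal sketch:

```python
import numpy as np

def fit_patch_plane(depth_patch):
    """Fit a plane d(x, y) = a*x + b*y + c to a small depth patch.

    Toy stand-in for Make3D's per-patch plane parameters; the paper
    infers these jointly via an MRF over image depth cues rather than
    fitting each patch independently as done here.
    """
    h, w = depth_patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, depth_patch.ravel(), rcond=None)
    return coeffs  # (a, b, c)

# Synthetic patch with known slope: d = 0.5*x - 0.25*y + 2
ys, xs = np.mgrid[0:8, 0:8]
patch = 0.5 * xs - 0.25 * ys + 2.0
a, b, c = fit_patch_plane(patch)
print(round(a, 3), round(b, 3), round(c, 3))  # 0.5 -0.25 2.0
```

The slope coefficients (a, b) encode the patch's 3D orientation and the offset c its depth, which is the geometric role the plane parameters play in the reference.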

            Towards real-time photorealistic 3D holography with deep neural networks


              Neural holography with camera-in-the-loop training


                Author and article information

                Journal: Optics Letters (Opt. Lett.), Optica Publishing Group
                Journal ID: OPLEDP
                ISSN: 0146-9592 (print), 1539-4794 (electronic)
                Published: February 01 2023 / February 15 2023
                Volume 48, issue 4, page 851
                DOI: 10.1364/OL.478976
                PMID: 36790957
                © 2023
                License: https://doi.org/10.1364/OA_License_v2#VOR
