Improving the quality of light-field data extracted from a hologram using deep learning

  • Dae-youl Park (Digital Holography Research Section, Electronics and Telecommunications Research Institute)
  • Joongki Park (Media Research Division, Electronics and Telecommunications Research Institute)
  • Received : 2022.11.30
  • Accepted : 2023.04.10
  • Published : 2024.04.20

Abstract

We propose a deep-learning method to suppress the speckle noise and blurring of the light field extracted from a hologram. The light field can be extracted by bandpass filtering in the hologram's frequency domain. The extracted light field has reduced spatial resolution owing to the limited passband size of the bandpass filter and to the blurring that occurs when the object is far from the hologram plane; it also contains speckle noise caused by the random phase distribution of the three-dimensional object surface. These limitations degrade the reconstruction quality of a hologram resynthesized from the extracted light field. In the proposed method, a deep-learning model based on a generative adversarial network is designed to suppress the speckle noise and blurring, thereby improving the quality of the light field extracted from the hologram. The model is trained on pairs of original two-dimensional images and the corresponding light-field data extracted from the complex fields generated from those images. The proposed method is validated using light-field data extracted from holograms of objects with single and multiple depths, as well as from mesh-based computer-generated holograms.
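The bandpass-filtering step mentioned above can be illustrated with a short sketch. The Python code below is a minimal, illustrative example and not the authors' implementation: it assumes a complex hologram sampled as a two-dimensional NumPy array and extracts one orthographic light-field view by windowing a square passband in the centred spectrum. The function name, the passband size, the choice of passband centres, and the synthetic test hologram are assumptions made for the example only.

# Minimal sketch of light-field view extraction by bandpass filtering
# (illustrative only; names, passband size, and the synthetic hologram
# are assumptions, not the paper's code).
import numpy as np

def extract_light_field_view(hologram: np.ndarray,
                             center: tuple,
                             passband: int) -> np.ndarray:
    """Extract one light-field (orthographic) view from a complex hologram
    by bandpass filtering its angular spectrum.

    hologram : 2-D complex field sampled on the hologram plane.
    center   : (row, col) of the passband centre in the centred spectrum;
               different centres correspond to different view directions.
    passband : side length of the square passband in frequency samples.
               A small passband limits the spatial resolution of the view,
               which is one of the degradations addressed in the paper.
    """
    # Centred 2-D spectrum of the hologram.
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))

    # Rectangular bandpass window around the chosen spatial-frequency centre.
    mask = np.zeros_like(spectrum)
    r0, c0 = center
    h = passband // 2
    mask[r0 - h:r0 + h, c0 - h:c0 + h] = 1.0

    # Back to the spatial domain; the intensity of the filtered field is the
    # view seen from the direction selected by `center` (it still carries
    # speckle from the random object phase).
    view_field = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.abs(view_field) ** 2


if __name__ == "__main__":
    # Synthetic random complex field standing in for a hologram.
    rng = np.random.default_rng(0)
    hologram = (rng.standard_normal((2048, 2048))
                + 1j * rng.standard_normal((2048, 2048)))
    N = hologram.shape[0]
    # Sample a 5x5 grid of view directions by sweeping the passband centre.
    views = [extract_light_field_view(hologram, (N // 2 + du, N // 2 + dv), 256)
             for du in range(-512, 513, 256)
             for dv in range(-512, 513, 256)]

Sweeping the passband centre over the spectrum yields the set of directional views that form the light field; under the assumptions above, such band-limited, speckled views are the kind of degraded input that the proposed GAN-based model is trained to restore.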

Keywords

Acknowledgement

This research was supported by an Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean Government (23ZH1300, Research on Hyper-realistic Interaction Technology for Five Senses and Emotional Experience) and by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean Government (MSIT) (2019-0-00001, Development of Holo-TV Core Technologies for Hologram Media Services).
